I have a new Lenovo T430 with Nvidia/Intel hybrid graphics. The Intel card is for power saving, the Nvidia card for performance. The easiest way to handle this setup is to just choose the correct card on bootup, but this is inconvenient. Bumblebee provides a way to use the Intel card most of the time, but use the Nvidia for high-performance tasks such as games, and ONLY for those apps. Once it’s set up, it works pretty well, except for a few apps that require additional tweaking, such as Steam, or Wine.
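
A quick illustration of how Bumblebee behaves once it's working (glxgears is just a stand-in here; any app works the same way):

glxgears           # runs on the Intel card – the default for everything
optirun glxgears   # runs on the Nvidia card, which powers up on demand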

One of the apps that requires tweaking is VMware Workstation (I’m running WKS 9). cmillersp provided a great write-up on the VMware Communities, which is what I set up on my system. VMware Workstation runs great – I can load OpenGL 3D apps in VMs and everything runs fantastic.

Except that I run VMs all the time, and using the Nvidia card all the time kills the power-saving benefit of the Intel card twice: once because the Nvidia card draws more power, and once again because BOTH GPUs are running at the same time. So I wanted a way to dynamically choose which card to run VMware under, based on whether I was on AC power or battery power.

The result is the script below, which I’ll be submitting to the Bumblebee wiki / project, as well as to the VMware forums. This is version 1, which works as follows:

  1. Must be installed via “sudo ~/bin/vmware --install” – it runs “cd /usr/lib/vmware/bin; mv vmware-vmx vmware-vmx.real”, then creates the wrapper script at /usr/lib/vmware/bin/vmware-vmx, so everything stays in a single portable script.
  2. Takes several options to force using the Nvidia card or not.
  3. If no force option is given, automatically determines the AC adapter state or the battery charge/discharge state (see the sketch after this list).
  4. Based on the above, decides whether to launch /usr/bin/vmware normally, or via the instructions from cmillersp.
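
The power check is the heart of it. Here’s a minimal sketch of the idea, assuming the usual /sys/class/power_supply layout – adapter names like AC or ADP1 vary by machine, so the glob is an assumption, not the script verbatim:

# Succeed if any AC adapter reports itself online.
on_ac_power() {
    local ac
    for ac in /sys/class/power_supply/A*/online; do
        [ -r "$ac" ] && [ "$(cat "$ac")" = "1" ] && return 0
    done
    return 1
}

if on_ac_power; then
    optirun /usr/bin/vmware "$@"   # on AC: use the Nvidia card
else
    /usr/bin/vmware "$@"           # on battery: stay on the Intel card
fi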

I’ve done some additional work to make it run via either “primusrun” or “optirun”. Optirun is much slower, but it works with fewer packages installed. I hope this is useful to someone else!
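
Picking between the two is the obvious existence check – a hypothetical sketch, not necessarily how the script does it:

# Prefer primusrun for speed; fall back to optirun if it isn't installed.
if command -v primusrun >/dev/null 2>&1; then
    RUNNER=primusrun
else
    RUNNER=optirun
fi
"$RUNNER" /usr/lib/vmware/bin/vmware-vmx.real "$@"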

Edit: 2012-12-17 – v1.1 added gksu auto-detection
Edit: 2014-01-08 – Moved to github repo: github.com/docsmooth/vmware-bumblebee

I built a stock Solaris 10 VM in the lab a few weeks ago. After adding the fifth zone to it to host a NIS domain, it started suffering “out of memory” errors like the following:
May 3 14:30:59 sol10-a tmpfs: [ID 518458 kern.warning] WARNING: /tmp: File system full, swap space limit exceeded
May 3 14:31:53 sol10-a tmpfs: [ID 518458 kern.warning] WARNING: /zones/sol10-z5/root/etc/svc/volatile: File system full, swap space limit exceeded
May 3 14:32:10 sol10-a tmpfs: [ID 518458 kern.warning] WARNING: /tmp: File system full, swap space limit exceeded
May 3 14:32:34 sol10-a genunix: [ID 470503 kern.warning] WARNING: Sorry, no swap space to grow stack for pid 5650 (cron)

On Solaris, /tmp and each zone’s /etc/svc/volatile are tmpfs filesystems backed by virtual memory (RAM plus swap), so these warnings mean the box has run out of swap. This is easy to fix if you know Solaris administration, but first confirm the diagnosis:
-bash-3.00# swap -l
No swap devices configured

This is a problem. The simple answer is to add more swap space, starting with a new disk to *host* the swap space. I added a 2GB thin-provisioned disk to the ESX VM, rebooted it, then ran the following:
-bash-3.00# devfsadm
-bash-3.00# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <default cyl 4092 alt 2 hd 128 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@0,0
       1. c1t1d0 <default cyl 2085 alt 2 hd 255 sec 63> zones
          /pci@0,0/pci15ad,1976@10/sd@1,0
       2. c1t2d0 <default cyl 1021 alt 2 hd 128 sec 32>
          /pci@0,0/pci15ad,1976@10/sd@2,0
Specify disk (enter its number): 2
selecting c1t2d0
[disk formatted]

FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> fdisk
No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
partition table.
y
format> part

PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print
Current partition table (original):
Total disk cylinders available: 1020 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)           0
  1 unassigned    wm       0                0         (0/0/0)           0
  2     backup    wu       0 - 1019         1.99GB    (1020/0/0)  4177920
  3 unassigned    wm       0                0         (0/0/0)           0
  4 unassigned    wm       0                0         (0/0/0)           0
  5 unassigned    wm       0                0         (0/0/0)           0
  6 unassigned    wm       0                0         (0/0/0)           0
  7 unassigned    wm       0                0         (0/0/0)           0
  8       boot    wu       0 - 0            2.00MB    (1/0/0)        4096
  9 unassigned    wm       0                0         (0/0/0)           0

partition> 0
Part      Tag    Flag     Cylinders         Size            Blocks
  0 unassigned    wm       0                0         (0/0/0)           0

Enter partition id tag[unassigned]: swap
Enter partition permission flags[wm]:
Enter new starting cyl[1]:
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 2g
`2.00gb' is out of range
Enter partition size[0b, 0c, 1e, 0.00mb, 0.00gb]: 1.99g
partition> print
Current partition table (unnamed):
Total disk cylinders available: 1020 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders         Size            Blocks
  0       swap    wm       1 - 1019         1.99GB    (1019/0/0)  4173824
  1 unassigned    wm       0                0         (0/0/0)           0
  2     backup    wu       0 - 1019         1.99GB    (1020/0/0)  4177920
  3 unassigned    wm       0                0         (0/0/0)           0
  4 unassigned    wm       0                0         (0/0/0)           0
  5 unassigned    wm       0                0         (0/0/0)           0
  6 unassigned    wm       0                0         (0/0/0)           0
  7 unassigned    wm       0                0         (0/0/0)           0
  8       boot    wu       0 - 0            2.00MB    (1/0/0)        4096
  9 unassigned    wm       0                0         (0/0/0)           0
partition> label
Ready to label disk, continue? y

partition> quit

FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
fdisk - run the fdisk program
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> label
Ready to label disk, continue? y

format> quit
-bash-3.00# swap -a /dev/dsk/c1t2d0s0
-bash-3.00# swap -l
swapfile             dev  swaplo  blocks    free
/dev/dsk/c1t2d0s0  32,192       8 4173816 4173816
-bash-3.00# echo "/dev/dsk/c1t2d0s0 - - swap - no -" >> /etc/vfstab
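
As a sanity check after the next reboot (my addition, not part of the original transcript), make sure the device comes back on its own:

swap -l                  # the new device should be listed again
grep swap /etc/vfstab    # and the vfstab entry should still be present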

To recap:
devfsadm
format
2
fdisk
y
part
print
0
swap


1.99g
label
y
quit
label
y
quit
swap -a /dev/dsk/c1t2d0s0
swap -l
echo "/dev/dsk/c1t2d0s0 - - swap - no -" >> /etc/vfstab

Yes, two of those are blank lines: one accepts the default partition permission flags (“wm”), and one accepts the default starting cylinder (“1”).
The info for this post was taken very directly from UtahSysAdmin.com. Huge thank you to Kevin for his post, which I needed to modify slightly to get my VM running.

EDIT: I am currently unsure if the last “echo” statement is right. After a recent reboot, swap wasn’t mounted until I removed that entry from /etc/vfstab. Soliciting comments. Thanks!

Because this information doesn’t turn up in easy-to-find searches, a simple reminder: SCSI device IDs can and will change.

A few months ago I hot-added a new disk to an ssh bastion host (a VM on ESX). As these things tend to go, I eventually took a maintenance window and updated firmware/BIOS/OS on the ESX host. When the bastion VM came back online, however, I was presented with an odd error:

[root@bastion ~]# fsck /dev/sdc1
e2fsck 1.39 (29-May-2006)
fsck.ext3: Device or resource busy while trying to open /dev/sdc1
Filesystem mounted or opened exclusively by another program?
[root@oracle1 ~]# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext3 rw,data=ordered 0 0
/dev /dev tmpfs rw 0 0
/proc /proc proc rw 0 0
none /selinux selinuxfs rw 0 0
devpts /dev/pts devpts rw 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
[root@oracle1 ~]# cat /etc/fstab
/dev/main/root / ext3 defaults 1 1
/dev/sdc1 /home ext3 defaults 1 2
/dev/main/var /var ext3 defaults 1 2
/dev/main/tmp /tmp ext3 defaults 1 2
LABEL=/boot /boot ext3 defaults 1 2
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/main/swap swap swap defaults 0 0
# Beginning of the block added by the VMware software
.host:/ /mnt/hgfs vmhgfs defaults,ttl=5 0 0
# End of the block added by the VMware software

So everything in the fstab is how I left it – /dev/sdc1 is the new disk I added, and it’s the one failing to mount. I went to check the disk for corruption, and instead found the real problem:

[root@oracle1 ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          13      104391   83  Linux
/dev/sda2              14        5221    41833260   8e  Linux LVM

Disk /dev/sdb: 42.9 GB, 42949672960 bytes
255 heads, 63 sectors/track, 5221 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1        5221    41937651   83  Linux

Disk /dev/sdc: 32.2 GB, 32212254720 bytes
255 heads, 63 sectors/track, 3916 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1        3917    31457279+  8e  Linux LVM

So, a simple fix – change “/dev/sdc1” to “/dev/sdb1” in /etc/fstab (or better, to LABEL=home), and boot back up.

It probably won’t happen on this server again, but it’s something to be aware of on both VM guests and physical servers. This is why so many newer Linux distributions use UUID= or LABEL= in fstab instead of device paths for SCSI disks.
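
For reference, the label-based version of the fix looks like this (standard e2fsprogs tools; “home” is just a label name I picked for illustration):

e2label /dev/sdb1 home    # give the filesystem a persistent label
blkid /dev/sdb1           # prints the UUID and LABEL, if you prefer UUID=

Then the fstab line no longer cares which /dev/sdX the disk lands on:

LABEL=home /home ext3 defaults 1 2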