Thursday, March 8, 2012

Ubuntu Oneiric on an Asus X53Z

In the hope that this blog post will save somebody a little time...

The computer came with a 500 GB hard drive with Windows 7 pre-installed. The drive was partitioned in roughly this manner:
  1. "RECOVERY" partition, 16GB. Full of Windows 7 drivers.
  2. "OS" partition, about 200 GB. Contained a live install of Windows 7.
  3. "DATA" partition, rest of the disk.
I decided not to touch the "RECOVERY" partition. Let Windows 7 claim another 3.2% of my available space; it's just a Microsoft tax, don't fret...

Then I shrank the "OS" partition down to 60 GB, which is more than enough, really. I install all non-MS software on the "DATA" partition, and I keep all my documents, music, photos, code etc. on there as well. How to shrink the "OS" partition? I used the MiniTool Partition Wizard, but I have since learned that Windows 7 is quite capable of doing this for you; no need for third-party software any more. Simply right-click "My Computer" (which you should be showing on your Desktop), select "Manage", navigate to "Storage > Disk Management", right-click the "OS" partition and select "Shrink Volume".

Having shrunk the "OS" partition, I had freed up some 100+ GB for Ubuntu to play with. Time to download the Oneiric amd64 ISO! At the time of writing, you can find the image here. Burn the image to a disc. You might want to experiment with creating a bootable USB stick instead, but there are several gotchas, so I can't really recommend that option.

Boot from the Oneiric disc. Now, here is your first gotcha! Oneiric will not play with your Realtek 8168 wired adapter, so your choices are to either (1) install offline or (2) install online over a WLAN or mobile internet connection. You may choose to install the base system offline; however, you will not be able to update or install anything until you have wireless internet access.

Now, when you're done installing and have rebooted into Oneiric, you need to get wired internet working. See this blog post for details. Warning! This assumes you have internet access, so you'd better sort that out first.
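
For reference, the usual workaround at the time was to keep the kernel's r8169 driver away from the chip and run Realtek's own r8168 driver instead. A minimal sketch of that idea, assuming you have already built and installed the r8168 module as the linked post describes (this is my reconstruction, not necessarily that post's exact steps):

    # stop the in-kernel r8169 driver from grabbing the RTL8168
    echo "blacklist r8169" | sudo tee -a /etc/modprobe.d/blacklist.conf
    sudo update-initramfs -u     # so the blacklist survives a reboot
    # load Realtek's driver (assumes the r8168 module is already installed)
    sudo modprobe r8168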

Then you need to follow this wiki page to get 3D working. This is really not optional if you need dual head support, full Unity 3D functionality, your adapter's native 1366x768 resolution, or just about any kind of 2D acceleration. The Ubuntu-supplied fglrx/catalyst drivers (via restricted drivers) just did not work for me. (This may have changed by the time you're reading this.)
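
In outline, the wiki route boils down to installing AMD's Catalyst driver by hand. A rough sketch, assuming you have downloaded the installer from AMD's site; the wiki page has the authoritative, current steps:

    # assumes the Catalyst installer was downloaded from amd.com
    chmod +x amd-driver-installer-*.run
    sudo sh amd-driver-installer-*.run
    sudo aticonfig --initial     # write a basic /etc/X11/xorg.conf
    sudo reboot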

Happy Oneiricing on your spanking new laptop!

Sunday, January 30, 2011

The Indium Includer and Class Loader

Indium comes with its own includer class called Core\Includer. This class is a thin set of wrappers around PHP's native include language construct. In an Indium application, every file system operation that depends on the include_path ini setting should be routed through Core\Includer. Of course, an application may choose to use include or file_get_contents directly, but this is neither supported nor encouraged.

Why does Indium encourage applications to wrap includes?
  1. First and foremost, we want to fight a potentially dangerous "feature" of PHP's includer. If the specified file cannot be found in include_path, PHP will silently search the current directory and the directory in which the calling script resides. This behavior has unpredictable (runtime-dependent) security implications, encourages implicit dependencies and imposes an unnecessary cost in file system operations. The only way to turn this "feature" off is to specify an absolute path, or a path relative to the current directory. Core\Includer therefore scans include_path to determine the absolute path of an include before calling PHP's native includer (see the sketch after this list).
  2. Controlling the scope inside which the include takes place. For reasons of encapsulation and security, Indium has to be able to decide what is visible to an included view.
  3. PHP's includer is a Swiss army knife. Wrapping and controlling it helps us identify common include idioms and provide standard, Indium-blessed methods for carrying them out.
  4. We love classes. We want to encourage application writers to use classes. Applications should rely on Indium's Core\ClassLoader for autoloading.
  5. Lastly, wrapping includes enables us to impose strict validation on file names, which helps with tightening up security.
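
To make points 1, 2 and 5 concrete, here is a minimal sketch of the idea. This is not Core\Includer's actual code; the method names and the validation regex are my assumptions:

    <?php
    namespace Core;

    class Includer
    {
        // Resolve $file against include_path and return its absolute path,
        // or null if it is not found. We never fall back to the current
        // directory or the calling script's directory (point 1 above).
        public static function resolve($file)
        {
            // Strict file name validation (point 5): refuse anything odd.
            if (!preg_match('#\A[A-Za-z0-9_][A-Za-z0-9_/.-]*\z#', $file)
                || strpos($file, '..') !== false) {
                return null;
            }
            foreach (explode(PATH_SEPARATOR, get_include_path()) as $dir) {
                $candidate = $dir . DIRECTORY_SEPARATOR . $file;
                if (is_file($candidate)) {
                    return realpath($candidate);
                }
            }
            return null;
        }

        // Include a file in a controlled scope (point 2): apart from this
        // method's own locals, the included code sees only the variables
        // extracted from $vars.
        public static function includeFile($file, array $vars = array())
        {
            $path = self::resolve($file);
            if ($path === null) {
                throw new \RuntimeException("Not found in include_path: $file");
            }
            extract($vars);
            return include $path;
        }
    }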


Okay, that's the Indium includer covered; on to the class loader. The Indium class loader provides a unified and highly configurable interface to class loading. It hooks into PHP's SPL autoloading infrastructure and sits on top of Core\Includer. Essentially, the class loader provides a mapping from (namespace path, class name) tuples to include paths. Include paths may be searched to any specified depth. In addition, the class loader has "magic" hooks for Indium exceptions and interfaces; these are automatically searched for in subdirectories named "exceptions" and "interfaces". The best thing about Indium's class loader and includer is that the application need not really care about them. As long as you put your controllers and models in APPLICATION_PATH/controllers and APPLICATION_PATH/models, things are guaranteed to just work.
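
Again as a sketch rather than Indium's real code (the method names and mapping structure are my assumptions, and the configurable search depth is omitted for brevity), the core of such a loader looks roughly like this:

    <?php
    namespace Core;

    class ClassLoader
    {
        // Maps a namespace prefix to a list of base directories.
        private $paths = array();

        public function addPath($prefix, $dir)
        {
            $this->paths[trim($prefix, '\\')][] = rtrim($dir, '/');
        }

        // Hook into SPL's autoloading infrastructure.
        public function register()
        {
            spl_autoload_register(array($this, 'load'));
        }

        public function load($class)
        {
            $parts  = explode('\\', trim($class, '\\'));
            $name   = array_pop($parts);
            $prefix = implode('\\', $parts);
            if (empty($this->paths[$prefix])) {
                return false;
            }
            foreach ($this->paths[$prefix] as $dir) {
                // The "magic" subdirectories for exceptions and interfaces.
                foreach (array('', '/exceptions', '/interfaces') as $sub) {
                    $file = "{$dir}{$sub}/{$name}.php";
                    if (is_file($file)) {
                        include $file; // the real loader goes through Core\Includer
                        return true;
                    }
                }
            }
            return false;
        }
    }

With APPLICATION_PATH/controllers and APPLICATION_PATH/models registered via addPath(), the "just works" behavior described above falls out naturally.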

Core\Includer and Core\ClassLoader have proven to be a strong and robust combination, giving the Indium framework a reliable and secure foundation.

Tuesday, January 11, 2011

Ubuntu on an Acer Aspire 5000


Okay, this is just me writing the kind of blog post I wish I could have googled before I installed Linux on my laptop. It would have saved me some time... I hope it comes in handy for someone else.


Hardware: [1]
  • Processor: Mobile AMD Turion 64 1600MHz
  • Memory: 1GB DDR SDRAM, 128MB of which dedicated to video.
  • Graphics adapter: SiS M760GX
  • Audio: Realtek ALC203 AC'97 codec
  • Ethernet adapter: SiS900 PCI Fast Ethernet
  • Wireless network adapter: Broadcom Corporation BCM4318
  • Storage: Seagate ST9100822A Momentus 4200.2 100 GB ATA

Partition layout, in short: 4 GB swap, a 32 GB root file system, and a separate /opt big enough to hold a copy of /home.

In retrospect, 4 GB of swap was really a mouthful, since in practice swap is rarely touched. I could have gotten by with 2 GB for hibernation, but at least now I have room for that 2 GB RAM upgrade...

32 GB for the root file system is a lot. I tend to accumulate a lot of crap in ~, so it's nice to have a safety margin. I made sure that /opt is large enough to contain a copy of /home in case I need to do something drastic with root.

Works out of the box:

  • Sound. Audiophiles would probably not buy this computer but the on board codec does the job for me.
  • Graphics: Xorg plays along just fine at the native 1280x800 resolution, no configuration needed. No DRI though, so 3D is slow!
  • Ethernet adapter.
  • Hibernation just works.
  • Touchpad. The first thing I did was disable "tap to click". My fine motor skills are simply inadequate for that.
  • 896 MB of usable memory is not terribly much, but I find it surprisingly hard to provoke the machine into swapping.

Issues:
  • Fan control is borked in 10.10 (Maverick)! For some reason the fan kicks in too late to prevent the CPU from transferring lethal amounts of heat to the graphics adapter, which overheats and freezes the computer. Solution: download Ubuntu 10.04 and hope this gets fixed in Natty.
  • Onboard SiS graphics adapter does not play nicely with the frame buffer console. Solution: blacklist the frame buffer console by appending the line "blacklist vga16fb" to /etc/modprobe.d/blacklist.conf (see the commands after this list).
  • Wireless is some proprietary Broadcom crap which does not work out of the box. Solution: use a wired connection, run jockey-gtk and install b43-fwcutter (again, commands after this list). Wireless should be up after a reboot.
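
Both fixes in command form, as a sketch (jockey-gtk will normally pull in the firmware package for you, but apt-get works too):

    # blacklist the frame buffer console (SiS workaround)
    echo "blacklist vga16fb" | sudo tee -a /etc/modprobe.d/blacklist.conf

    # fetch the Broadcom firmware extractor over the wired connection
    sudo apt-get install b43-fwcutter
    sudo reboot
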
Other than the stuff Casper installs for you, I installed sensors-applet for monitoring the temperature. Anticipating the Maverick move away from f-spot [2], I also installed the Shotwell photo manager. For PHP development I installed apache2 and php5. I'm currently evaluating NetBeans 7 Beta [3] as my IDE, so I have installed it in my user directory. NB 7 needs the Sun JDK, which I downloaded and installed to /opt/java.
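
In package terms that boils down to something like this (package names as they were in Maverick):

    sudo apt-get install sensors-applet shotwell apache2 php5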

Other software sources I have added:
That's it, really. I will add content to this post whenever I stumble over some new issue.




Footnotes:
  • [1] http://support.acer.com/acerpanam/notebook/0000/Acer/Aspire5000/Aspire5000sp2.shtml
  • [2] http://linux.slashdot.org/story/10/06/14/0055221/Ubuntu-Replaces-F-Spot-With-Shotwell
  • [3] http://netbeans.org/community/releases/70/

Sunday, July 20, 2008

Compression and decompression shoot out

New benchmark!

I decided to do another test suite, this time including decompression. The data is a concatenation of two tar archives. The first tar contains 3+ GB of C:\Program Files from a Windows XP installation. The second tar is a 3+ GB Ubuntu installation, sans /usr, which is on a separate partition. In total this amounts to 6.53 GB.

The test host is my girlfriend's 3 GHz P4 with 1 GB RAM and 2 MB L2 cache, rated at ~6000 bogomips. The computer runs Ubuntu 8.04 with a custom 2.6.25.4 kernel. All files were on a FUSE-mounted NTFS partition.

In addition to processing time, I tried to measure RAM usage with GNU time, but I didn't get any meaningful results. I did manage to record page faults, and I may add those statistics later.

This time lha is out; it doesn't seem to like files this big. Since I was also going to benchmark decompression, I decided to include LZO, which is allegedly very fast at decompression.

Complete list of the contestants:

  • bzip2 1.0.4 (-9k)
  • gzip 1.3.12 (-9c)
  • lrzip 0.23 (-w 9 -q)
  • LZMA SDK 4.43 (-9kq)
  • LZO library 1.08 (-9 -k)
  • RAR 3.71 (-m5)
gzip was invoked in redirect mode (-c) because I didn't want it to throw away the source file. This shouldn't really affect compression ratio or processing times.
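
For the record, the invocations looked roughly like this. This is my reconstruction using the flags listed above, with a made-up archive name; the actual script is linked at the end of this post:

    # GNU time (-v) records wall clock time and page faults
    /usr/bin/time -v gzip -9c data.tar > data.tar.gz   # -c keeps the input file
    /usr/bin/time -v bzip2 -9k data.tar
    /usr/bin/time -v lzop -9 -k data.tar               # lzop as the LZO front end (assumption)
    /usr/bin/time -v lzma -9kq data.tar
    /usr/bin/time -v rar a -m5 data.rar data.tar
    /usr/bin/time -v lrzip -w 9 -q data.tar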

Below are the results broken down into compressed size and ratio, compression time and decompression time.

Compressed size, from worst to best:

  • uncompressed: 7,011,041,280 (6.53 GB), ratio 1.000
  • lzo:          4,719,645,902 (4.40 GB), ratio 1.486
  • gzip:         4,563,292,811 (4.25 GB), ratio 1.536
  • bzip2:        4,428,910,323 (4.12 GB), ratio 1.583
  • rar:          4,125,923,141 (3.84 GB), ratio 1.699
  • lzma:         3,840,213,621 (3.58 GB), ratio 1.826
  • lrzip:        3,585,069,056 (3.34 GB), ratio 1.955

Wall clock compression time, from slowest to fastest:

  • lzma:  8,409 s
  • lrzip: 7,904 s
  • rar:   5,906 s
  • lzo:   3,487 s
  • bzip2: 3,034 s
  • gzip:  1,598 s
  • cat:     111 s (for reference)

Wall clock decompression time, from slowest to fastest:

  • lrzip: 2,830 s
  • bzip2: 1,491 s
  • lzma:    981 s
  • rar:     604 s
  • gzip:    503 s
  • lzo:     449 s
  • cat:     111 s (for reference)



The shell script used to gather these statistics is here.

Wednesday, June 4, 2008

Compression shoot out

I had some time on my hands the other day, and a 2.9/4 GiB Debian lenny ext3 file system sitting rather idly on one of my hard drives (this fs serves as a backup OS in case my bleeding-edge Gentoo goes belly up on me). The fs includes Xorg, Firefox, Xfce4 and GNOME, among other installed packages. Having just observed the coreutils distribution switch to lzma, I was really curious about lzma's performance, and a file system filled with Linux applications and data was a good candidate for compression testing. I decided to round up gzip, bzip2, lha, zip, rar, lzma and lrzip to see what they were made of.

I copied the blocks to a file using dd, loop mounted the file, and then copied from /dev/zero to a file on the loop-mounted fs so as to fill the unused blocks with redundant (highly compressible) data. Eventually dd stopped with a "file system full" error, so I synced the fs and deleted the zero-stuffed file. Finally I unmounted the loopback fs and started compressing...
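
In command form, the preparation looks something like this (a sketch; the device name and paths are placeholders):

    dd if=/dev/sda5 of=lenny.img bs=1M              # snapshot the partition
    sudo mount -o loop lenny.img /mnt/lenny
    dd if=/dev/zero of=/mnt/lenny/zero.fill bs=1M   # runs until the fs is full
    sync
    rm /mnt/lenny/zero.fill
    sudo umount /mnt/lenny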

Several CPU hours later I had an impressive amount of digits which I have decided to share with you through this blog. Here are the hard numbers:

Original file:     4,293,563,904 (3.999 GiB) (4096 MiB rounded down to the nearest cylinder boundary)
Zero file size:    1,150,640,128 (1.072 GiB)
Net fs size:       3,142,923,776 (2.927 GiB) (original minus zero file; used as the uncompressed size)
fs usage per df:   3,065,671,680 (2.856 GiB)
ext3fs overhead:      77,252,096 (2.458 %) (net fs size minus df usage)

compressor    size                         ratio    CPU usage
none          3,142,923,776 (2.9271 GiB)   1.0000
gzip -9       1,177,245,158 (1.0964 GiB)   2.6697      593 s (sys 24 s)
zip -9        1,177,245,231 (1.0964 GiB)   2.6697      554 s (sys 25 s)
lha -o7       1,152,406,262 (1.0733 GiB)   2.7273    1,492 s (sys 39 s)
bzip2 -9      1,082,698,303 (1.0083 GiB)   2.9029    2,316 s (sys 25 s)
rar -m5         942,002,518 (0.8773 GiB)   3.3364    3,481 s (sys 30 s)
lzma -9         871,912,252 (0.8120 GiB)   3.6046   10,334 s (sys 23 s, wall 10,471 s)
lrzip -w9       849,062,862 (0.7908 GiB)   3.7016    4,483 s (sys 35 s, wall 7,000 s)

Wall clock times are not much different from user times, except for lzma and lrzip. This is because these beasts use large compression windows, which results in swapping.

Observations:

  • gzip and Zip use the same algorithm, i.e. PKZIP deflate. The file size difference is in the headers. Zip compression is noticeably faster.
  • lrzip spent 42 minutes (!) swapping. I need more RAM!
  • There is an almost perfect correlation between CPU time spent and compression ratio achieved. The exception is that lrzip is a lot quicker than lzma (which lrzip uses, incidentally) at the highest settings.
  • It would be interesting to measure RAM usage during compression, but I haven't found a simple way to do that.
  • It would be interesting to benchmark decompression too. To be done.

Benchmark host and contestant information:

AMD Athlon XP 2400+ @ 2 GHz, 256 kB L2
1 GB DDR PC2700 @ 266 MHz
Linux 2.6.24

  • gzip 1.3.12
  • Zip 2.32
  • LHa 1.14i-ac20050924p1
  • bzip2 1.0.5
  • RAR 3.71
  • LZMA SDK 4.32.6
  • lrzip 0.23
All built on host with gcc 4.2.3 using CFLAGS="-march=athlon-xp -O2 -pipe".