WWDC Predictions

Yup, I’ll be at WWDC this year. With only 3ish weeks left, here are my predictions:

iPhone 4[S] Late October – Early November

  • World Phone. CDMA / LTE. Access any network, worldwide. True 4G.
  • 32/64GB Capacities. $199 / $299 respectively. Black / White.
  • NFC payment capabilities. Integration with Apple Retail Stores.
  • Apple A5 Dual Core CPU. Same 512MB RAM. Same battery life.
  • Edge-to-edge 4.0″ screen. Same resolution. Retina Display.
  • 8MP Camera. 1080p Video Recording.
  • Capacitive Home button area. Gestures.
  • “Slimmer”-ish.

iOS 5 Early October

  • iPhone 3G support dropped.
  • Notifications revamp.
  • New ways of multitasking. Exposé. Multitouch gestures.
  • Dashboard.
  • Photo Booth.
  • Deeper voice integration.
  • Maps update. It’s a social network…for people.
  • iTunes cloud storage: music, apps (preferences?), books, video.
  • “Wireless Sync” via cloud storage.
  • iWork.

Rsync 3.0.8, Windows and chown

Upon updating cwRsync to its latest version, I found that my daily backup script was failing. The last couple of Rsync updates have become less Cygwin / Windows friendly due to changes in how ownership (users and groups) is set on files.

In particular, 3.0.8 introduced a change that issues a transfer error when it isn't possible to set the uid or gid with chown (http://rsync.samba.org/ftp/rsync/src/rsync-3.0.8-NEWS). Obviously, this is ALWAYS going to produce an error on Windows, and it was the source of my problems.

While the backup would transfer successfully, the log would be filled with the error message “uid 4294967295 (-1) is impossible to set on …” and the final exit code would be non-zero, making my script think something had gone horribly wrong. This is the changeset that introduced the error: http://www.mail-archive.com/rsync-cvs@lists.samba.org/msg06292.html
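As a stop-gap, the backup script itself can tolerate rsync’s “partial transfer” exit code (23, which rsync returns for per-file errors like the impossible chown) rather than treating any non-zero exit as failure. A minimal sketch — the sh -c line is a placeholder standing in for the real rsync command line:

```shell
#!/bin/sh
# Run a backup command, but treat rsync exit code 23 ("partial
# transfer due to error", e.g. the impossible chown) as success.
run_backup() {
    "$@"
    status=$?
    [ "$status" -eq 23 ] && status=0
    return "$status"
}

# Placeholder invocation; substitute your real rsync command line here.
run_backup sh -c 'exit 23' && echo "backup ok"
```

Any other non-zero code (a genuinely failed transfer) still propagates, so the script only ignores the specific chown noise.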

So, I went scouring through the rsyncd.conf man page and the 3.0.8 source to figure out how to skip setting the uid/gid on a file. Turns out you can, albeit through an oddly named option: “fake super”. Reading the source, I found Rsync will only attempt to chown if it’s running as “root”. By default on Windows, Rsync thinks it IS “root”, even though that may not be the case. The “fake super” option forces Rsync to believe it’s not running as a real super user, so it skips attempting to chown files.

So, in short, to get cwRsync (or just Cygwin Rsync) working properly on Windows, you should add the following lines to either your global or module configuration:

uid = 0
gid = 0
fake super = yes

Huzzah, no more errors!

Browser JavaScript Performance Shootout

Intel Core 2 Duo T9300 @ 2.50GHz, 4GB DDR2-667, NVIDIA GeForce 8400M GS
Windows 7 Ultimate x64 RTM January 2011 Updates
x86 Browsers, No Add-ons, No Plugins, Blank Profiles
Idle CPU, Browsers tested individually, Browsers re-opened after each run of a test

Mozilla Firefox 4.0 Beta 10, Mozilla Firefox 3.6.12, Opera 11.01, Google Chrome 8.0.552.237, Google Chrome 10.0.651.0, Microsoft Internet Explorer 8.0.7600.16385, Microsoft Internet Explorer 9.0.8023.6000, Apple Safari 5.0.3 (7533.19.4), Apple Safari 5.0.3 (WebKit r76750)


The Numbers

The point? Chrome is fast. Firefox 4 ISN’T slow. IE9 is actually promising.


A Little Firefox 4 Tweak: Extra Padding When Maximized

I have a pet peeve with certain applications that decide that the title bar padding at the top of the window is now a waste of screen real estate.

A Safari beta was one of the first browsers to introduce the idea of using the title bar to hold a row of tabs. At first thought, you may think this has no disadvantage. More viewing real estate for the page, who DOESN’T want that?! Well, Apple backtracked on the idea before Safari 4’s release. Why? The reason isn’t entirely clear (it probably conflicts, indirectly, with the Windows User Experience Interaction Guidelines), but other browsers latched on to the idea, and half of the major players did it all wrong. The guilty culprits? Chrome and Firefox.

So, what did they do wrong exactly? Well, let’s set the scene: your browser is maximized and your tab bar is full of tabs. You want to restore the window to its un-maximized state. How do you do it!? You can’t just double click (or, in Windows 7, click and drag) anywhere on the title bar. That’s where the tabs are. In fact, if you do so you’ll often find yourself creating a new tab or dragging a tab off the bar into a new window. Instead, you have to reach over to the Restore button (top right) or find a small gap in the tabs that counts as clicking the title bar instead of a tab. This takes more time and mouse precision. It’s a stupid way of implementing it, and there’s such a simple fix.

Of the other major browsers, Internet Explorer and Safari leave the title bar completely blank and intact. Opera on the other hand did it properly. When maximized, above the tab bar is a 3 pixel gap that counts as the title bar. The space is barely noticeable but it allows you to whip your mouse up to the top of the screen, without any precision movements, double click and restore the window. In fact, you don’t even need 3 pixels, 1 is more than enough!

So, today I moved over to Firefox 4 hoping that all my add-ons are now compatible. Hooray! They are… and Firefox 4 really is a huge performance improvement… BUT it lacks that space above the tabs when maximized. Doing a quick click and drag to restore the window results in pulling a tab off the bar. So, I wrote a 3 line CSS fix that overrides the padding-top of the tab bar to 1px when maximized and “Tabs on Top” is enabled. I bundled it into an extension and submitted it to addons.mozilla.org.
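For the curious, a rule of this shape can also live in a userChrome.css file. The selectors below are a sketch from memory, not necessarily the extension’s exact code, and they omit the “Tabs on Top” condition:

```css
/* Only when the window is maximized: restore 1px of clickable title bar
   above the tab strip. */
#main-window[sizemode="maximized"] #TabsToolbar {
  padding-top: 1px !important;
}
```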

The extension is called “Extra Padding When Maximized” and can be found here: https://addons.mozilla.org/en-US/firefox/addon/extra-padding-when-maximize/

Hopefully, the add-on won’t break in future updates. It’s such a minor modification. Hopefully the Mozilla team will get around to implementing it at some stage.


ZFS, zpool v28, OpenSolaris / OpenIndiana b147, 4k drives and you

I went on a mission to try out the latest zpool version…which is in OpenIndiana b147. FreeBSD 9 will be introducing zpool v28 too, but not until at least next year.

I bought myself 9 Samsung HD203UI drives. These are “green” drives that use “Advanced Format” 4k / 4096 byte sectors.

I set up a benchmark to test whether the fix outlined by Solarismen.de, a patch to the zpool command that sets ashift to 12 instead of 9, actually makes a difference.

I went ahead and checked out the OpenSolaris source (which hasn’t been touched for a while now) and compiled zpool with the fix. You can find a version of zpool, which supports ashift 12 and zpool v28, right HERE.

If you want to do it yourself you can follow these steps and then use zpool-12 to create your zpools:

cd /tmp
# Grab the OpenSolaris source (anonymous Mercurial checkout)
hg clone ssh://anon@hg.opensolaris.org/hg/onnv/onnv-gate
cd onnv-gate/usr/src/cmd/zpool

# Symlink the libraries the final link step expects to find locally
ln -s /usr/lib/libuutil.so.1 libuutil.so
ln -s /lib/libdladm.so.1 libdladm.so

Edit zpool_vdev.c, adding the ZPOOL_CONFIG_ASHIFT line directly above the existing ZPOOL_CONFIG_IS_LOG line:
	verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_ASHIFT, 12) == 0);
	verify(nvlist_add_uint64(vdev, ZPOOL_CONFIG_IS_LOG, is_log) == 0);

# Compile the zpool sources, then the shared timestamp.c they link against
gcc -O2 -DTEXT_DOMAIN='"en_US"' -I/tmp/onnv-gate/usr/src/cmd/stat/common -I/tmp/onnv-gate/usr/src/common/zfs -I/tmp/onnv-gate/usr/src/lib/libuutil/common -I/tmp/onnv-gate/usr/src/lib/libdiskmgt/common -c *.c
cd ../stat/common
gcc -O2 -DTEXT_DOMAIN='"en_US"' -I/tmp/onnv-gate/usr/src/cmd/stat/common -I/tmp/onnv-gate/usr/src/common/zfs -I/tmp/onnv-gate/usr/src/lib/libuutil/common -I/tmp/onnv-gate/usr/src/lib/libdiskmgt/common -c timestamp.c
cp timestamp.o ../../zpool

# Link everything into zpool-12 and install it alongside the stock zpool
cd ../../zpool
gcc -o zpool-12 *.o -L. -lzfs -lnvpair -ldevid -lefi -ldiskmgt -luutil -lumem -ldladm
cp zpool-12 /usr/sbin

I also followed a performance rule of thumb found by sub.mesa: RAIDZ and RAIDZ2 vdevs should only contain an “optimal” number of disks. A RAIDZ should contain 3, 5 or 9 disks and a RAIDZ2 should contain 6 or 10 disks, etc.
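The usual explanation for those magic numbers is ZFS’s default 128 KiB recordsize: with a power-of-two count of data disks, each disk’s share of a stripe is a whole number of 4 KiB sectors. A quick check:

```shell
# ZFS default recordsize is 128 KiB. In a RAIDZ1 of N disks, N-1 hold
# data; the stripe divides evenly only when each disk's share is a
# whole number of 4 KiB sectors.
recordsize=$((128 * 1024))
for disks in 3 4 5 6 9; do
    data=$((disks - 1))
    per_disk=$((recordsize / data))
    if [ $((recordsize % data)) -eq 0 ] && [ $((per_disk % 4096)) -eq 0 ]; then
        echo "RAIDZ1 of $disks disks: even ($per_disk bytes per disk)"
    else
        echo "RAIDZ1 of $disks disks: uneven"
    fi
done
```

3, 5 and 9 disks come out even; 4 and 6 don’t, which matches sub.mesa’s rule (for RAIDZ2 the same logic applies with two parity disks, hence 6 or 10).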

I compiled bonnie++ 1.96 and ran the benchmarks on stock (untuned) ZFS: no compression, no dedup, 9 drives in RAIDZ1.

Server specs:
Intel Xeon X3440
8GB DDR3 1066
HP SAS Expander (2x uplink to HBA)
LSI 9211-8i
Supermicro X8SIL-V


Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xobiusii-nas    16G   210  98 495776  54 312661  55   493  99 900717  52 900.3  20
Latency             46783us    1413ms     796ms   31988us   86977us     539ms
Version  1.96       ------Sequential Create------ --------Random Create--------
xobiusii-nas        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  6840  18 +++++ +++  9139  15  9368  23 +++++ +++ 10314  16
Latency             10390us     260us     128us    7648us      19us     131us


Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xobiusii-nas    16G   211  99 576021  66 312749  56   478  99 921569  53  1184  23
Latency             42479us    1160ms     888ms   38757us     182ms     479ms
Version  1.96       ------Sequential Create------ --------Random Create--------
xobiusii-nas        -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  9079  24 +++++ +++ 11747  19 10605  28 +++++ +++ 11247  21
Latency             10675us     185us     166us    7853us     128us     241us

I think the results show that in some areas ashift=12 gives a significant performance boost, but otherwise things are much the same. Best to apply the patch :) I also think the 4k sector problem with ZFS has been greatly exaggerated. Sure, there seems to be a slight performance loss, but I suspect a lot of the people having performance difficulties have a combination of 4k sector drives AND a non-optimal number of drives in their RAIDZ/RAIDZ2 array. Note that by forcing ashift, you may have some difficulties importing the zpool on another machine…maybe.


Unlimited Nudging and File Transfers for WLM 2011 15.4.3502.922

You heard it here first!

Just spent the last day or so messing in OllyDbg and IDA Pro to figure out where WLM 2011 limits simultaneous file transfers and delays between nudges.

Here are the details:

Unlimited Nudging:
msnmsgr.exe Offset 0x003025F3
Change 8B87F0000000 to 33C040909090

Unlimited File Transfers:
msnmsgr.exe Offset 0x00224D82
Change 83F8037C to 909090EB
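Applying a byte patch like the above can be scripted rather than done by hand in a hex editor. A minimal sketch using dd on a scratch file — the demo file and bytes below are illustrative, not the real msnmsgr.exe offsets, and you should always keep a backup of the original:

```shell
# patch_bytes FILE OFFSET: overwrite the bytes at OFFSET with bytes
# read from stdin, without truncating the file.
patch_bytes() {
    dd of="$1" bs=1 seek="$2" conv=notrunc 2>/dev/null
}

printf 'AAAAAAAA' > demo.bin                     # scratch stand-in for the exe
printf '\220\220\220' | patch_bytes demo.bin 3   # \220 is octal for 0x90 (NOP)
od -An -tx1 demo.bin | tr -d ' \n'; echo         # -> 4141419090904141
```

The same pattern works for the real patches: seek to the listed offset and write the replacement bytes.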

You will probably need to run A-Patch first, as it patches a file to disable an integrity check on msnmsgr.exe.

I’ve contacted Ahmad at A-Patch about my findings…hopefully they’ll be in the next release :)

In the future I hope to port some of the other missing A-Patch features to WLM 2011 (like disabling the Nudge shake)…but I also want to try to disable the new Microsoft link protection junk that occurs whenever you open an unverified link in your browser. Just give me a few more days ;)



Forcing Feed Updates in Google Reader…Automagically

Since 2006 I had been looking for an RSS client that synced articles between multiple computers. Google Reader existed but, in its early stages, had a piss poor feature set. Instead, I tried out an open source project, Tiny Tiny RSS. Even though the project was in its infancy, it was filled with features, albeit a bit buggy. The problem is that TT-RSS needed to be hosted on my own server. If my server goes down, I get no RSS updates.

A couple of months ago I switched to Google Reader, and although it STILL isn’t as packed with features, there are enough GreaseMonkey scripts out there to make it bearable. Google Reader is now my primary RSS reader.

BUT, there’s still one MAJOR problem with Google Reader: Google determines when your feeds are updated. If a feed has many subscribers, it’s likely to be updated more often than one with only a few. If you have custom RSS feeds, you’ll find they’re only updated every 12-24 hours. This sucks.

But why does it suck, you ask? Well, consider fast moving RSS feeds. For example, your Twitter stream. If you subscribe to Twitter RSS and follow a handful of users, chances are you’re going to get a fair few status updates per hour. Now, if the Twitter RSS feed only holds the past 30 entries and you’re getting about 60 updates an hour…and Google is only updating your feed once every 12 hours (because YOUR Twitter feed is unique to YOU)…you’re missing out on more than 600 entries!
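The arithmetic behind that claim, for the sceptical:

```shell
rate=60       # status updates per hour flowing into the feed
window=30     # entries the Twitter RSS feed holds at any one time
interval=12   # hours between Google's refreshes of an unpopular feed

# Everything published between refreshes, minus what is still sitting
# in the feed when Google finally fetches it, is lost for good.
echo "entries missed per refresh: $((rate * interval - window))"
```

690 missed entries per 12-hour cycle, comfortably “more than 600”.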

Google provides you with a “Refresh” button that allows you to manually update the feed…but there’s no button to manually update all your feeds and there’s no way to specifically set a refresh interval for fast moving feeds. I found this unacceptable…so I conjured up a little PHP script to force refresh all my feeds. This way I can run it as a cronjob and refresh my feeds every 10 minutes.

Here’s the script: http://digitaldj.net/greader_forceupdate.txt

What do you do with it? Simple. Rename the file, changing the .txt extension to .php then edit the file adding in your Google username and password in the relevant fields. Set it up to run as a cronjob (php greader_forceupdate.php) or call it manually via a web server. If you are calling it from a web server make sure you change the path of $cookies_file to somewhere that isn’t web accessible (i.e. not in htdocs, wwwroot, public_html, you get the idea).

How does it work? Also, simple. It logs into Google Reader, stores your cookies in a file named greader_cookies.txt, grabs a list of your feeds then calls the Refresh button for every feed in your list.
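The same flow can be sketched in a few lines of shell. `fetch` below is a stand-in for an authenticated HTTP GET (the real script uses the cookies stored in greader_cookies.txt), and the feed list is hard-coded here rather than parsed from the Reader subscription API:

```shell
# Stand-in for an authenticated HTTP GET; the real script would call
# curl with the stored cookie jar against Reader's refresh endpoint.
fetch() { echo "GET $1"; }

refresh_all() {
    # The real script parses this list from the subscription API;
    # it is hard-coded here purely for illustration.
    for feed in "http://example.com/a.rss" "http://example.com/b.rss"; do
        fetch "refresh?feed=$feed"
    done
}

refresh_all
```

In the real script each feed URL is escaped and sent with the required auth token; this sketch only shows the control flow of “log in once, then hit Refresh for every subscription”.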

How do I know it worked? You should be able to pick any feed in your list and click “show details”. The last updated time should be recent for every feed.

Note that feeds WILL NOT update if they have been updated within the past 10 minutes, so there’s no point in running the script more often than once every 10 minutes.


Video Game Music I Fell In Love With

The entire Ridge Racer R4 soundtrack, in particular Kouta Takahashi – Urban Fragments. The whole soundtrack is fantastic, which is why I uploaded it here (ripped from the original PSX CD, 192 kbps 44.1kHz). There’s also a remix of the intro track: Hiroshi Okubo – One More Win (also in the uploaded pack).

The Jazz Jackrabbit (and the sequel) menu music. Original MOD music from here (and the sequel here).

All of the Sonic music. In particular the ending to Sonic & Knuckles (or Sonic 3 & Knuckles, same thing). The credits mashup is awesome. The original VGMs: Sonic 1, Sonic 2, Sonic 3 & Knuckles

The Doom 3 theme. Linky.

This one’s a little obscure: the menu music from a leaked UT2003 beta. The menu music in the final version of the game was different, but the ogg was kept in the game files, although never used. I loved it the first time I launched the beta. Oh, the days of drooling over the graphics of CTF-Maul. Link to the music.

The Metal Gear Solid VR Missions (or Integral) intro music.

The Half-Life 1 Closing Theme (Link)…and the remix of it in Half-Life 2 (Double Link), apparently called Tracking Device by Kelly Bailey. The original is probably better.

And finally, everyone’s favourite from Portal. Still Alive.

A little less game related…The intro to Windows XP and 98 music.


Xobius II Scratchpad: How my new NAS will work

Time to upgrade my NAS, Xobius. Currently it has 10TB (made up of 2 RAID1s of 1TB and a RAID5 of 6x 1.5TB). All of it is full. This is my scratchpad for planning Xobius II (on the cheap). FYI a Drobo Pro is about $2k…and it holds an underwhelming 8 drives.


CPU: AMD Athlon II X4 635 $100

Why? I would normally go Intel but low-end Intel CPUs don’t support hardware virtualisation (see Operating Systems) or ECC RAM.

Case: Norco RPC-4224 $459

Why? This thing holds 24 3.5″ SATA drives…and for that, it’s damn cheap.

Memory: Kingston KVR1066D3E7SK2/4GI x2 $316

Why? Software RAID can use a lot of RAM when you have a ton of drives. I would usually go Corsair; alas, Kingston are the only manufacturer I found providing Unbuffered / Unregistered ECC RAM. ECC is needed to guard against memory corruption in the RAID, but it has to be unbuffered and unregistered, as only server CPUs support registered memory.

Motherboard: ASUS M4A89GTD PRO/USB3 $186

Why? Integrated graphics, gigabit LAN, 6x SATA III ports. It also has two x8 PCIe slots AND an x4 slot, so I can chuck in 3 SAS HBAs. The two PCI slots will be used for an extra Gigabit NIC and a Wireless NIC (for bridging two networks).

Power Supply: Corsair TX-850 $178

Why? Corsair are awesome these days. Hard Disks use at max about 30W, 850W should be plenty.

SAS HBAs: 3x Supermicro AOC-USAS-L8i $480

Why? This is how the drives connect from the Norco’s SAS SFF-8087 ports to the system. I need a total of 6 ports for all 24 drives (4 drives per port), so I need 3 cards (at 2 ports per card). All are PCIe x4, so I’ll use the two x8 slots and the x4 on the motherboard. These are only SATA II, as the drives I’ll be using are only SATA II. Since I’m only getting 20 drives total, I’ll have 4 bays free; if I need to upgrade in the future, I can connect those 4 drives directly to the motherboard (which has SATA III), or to a controller in the free PCIe x1 slot using an SFF-8087 to SATA breakout cable. People often suggest using an HP SAS Expander instead, which has 8x SFF-8087 SAS ports (6 to connect drives and 2 for uplink to the SAS HBA). I believe that method limits your bandwidth, as you are essentially plugging all 24 drives into a single SAS card. Using 3 controllers allows maximum bandwidth allocation between drives.
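Back-of-the-envelope numbers for that bandwidth argument, assuming a SATA II / SAS 3G lane carries 3 Gb/s and an SFF-8087 port bundles 4 lanes:

```shell
lane=3                  # Gb/s per SATA II / SAS 3G lane
port=$((4 * lane))      # Gb/s per SFF-8087 port (4 lanes = 12 Gb/s)
drives=24

# Single HBA behind an expander: all 24 drives share the 2-port uplink.
expander_uplink=$((2 * port))
echo "expander: $((expander_uplink * 1000 / drives)) Mb/s per drive"

# Three HBAs with 2 ports each: 6 ports of host bandwidth.
direct=$((6 * port))
echo "direct:   $((direct * 1000 / drives)) Mb/s per drive"
```

Roughly 1 Gb/s per drive through the expander uplink versus 3 Gb/s per drive with three HBAs, ignoring protocol overhead; only relevant when many drives stream at once, as they do in a big RAIDZ.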

Cables: 6x SFF-8087 to SFF-8087, 4x Male Molex to 7x Female Molex Power connects and maybe 1x SFF-8087 to 4x SATA breakout cable $100

Why? The Norco case connects its drives via SAS cables from the backplane: 6 ports, 4 drives per port, so 6 cables connect all the drives to the system. I need plenty of power connectors to feed all the drives on the backplane, and I might need an SFF-8087 to 4x SATA cable if I wish to connect SATA III drives directly to the motherboard.

Drives: 6x 1.5TB Seagate Barracuda 7200.11, 2x 1TB Samsung HD103UJ, 2x 1TB Hitachi 7K1000, 10x 2TB Hitachi 7K2000 $1500

Why? The only new drives will be the 2TB Hitachi drives. They’re cheap and fast…and NOT green power (TLER doesn’t matter in Unix-based systems…as much). Who likes green power drives anyway? Remember RAID used to stand for Redundant Array of INEXPENSIVE Disks. These are great value for performance.

Operating System

I’m a Windows fanboy, so I really wanted to use Windows. My current software RAID5 is Windows based, so keeping Windows would have been easiest. Alas, OpenSolaris and ZFS seem to be the in thing these days for RAID. So, the plan is to run BOTH Windows and OpenSolaris under VMware ESXi. The SAS HBAs seem to be compatible with both VMware ESXi and OpenSolaris, and the CPU is 64-bit and supports hardware virtualisation: perfect for ESXi. The plan is to allocate 2GB of RAM to Windows Server 2008 R2 (Datacenter) and 6GB to OpenSolaris. OpenSolaris will deal only with the ZFS RAID, while Windows will provide all my network services (WAMP, OpenVPN, etc.).

RAID Configuration

6x 1.5TB (RAID-Z2 RAID6)

10x 2TB (RAID-Z2 RAID6)

4x 1TB (RAID-Z2 RAID6)

250GB OS Drive

All the RAIDZs pooled so the whole lot shows up as a single volume. Software RAID means I can migrate the array to new hardware if a controller fails.

TOTAL: 24TB (RAID6) 28.5TB (RAID5)
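Those totals check out; a quick sanity check of the arithmetic, with sizes in tenths of a TB to keep it integral:

```shell
# Usable space = (disks - parity) * disk size, summed over the three
# vdevs: 6x 1.5TB, 10x 2TB, 4x 1TB.
capacity() {
    p=$1   # parity disks per vdev: 2 for RAIDZ2, 1 for RAIDZ1
    tenths=$(( (6-p)*15 + (10-p)*20 + (4-p)*10 ))
    echo "$((tenths / 10)).$((tenths % 10)) TB"
}

capacity 2   # RAIDZ2 total
capacity 1   # RAIDZ1 total
```

Which gives 24.0 TB for RAIDZ2 and 28.5 TB for RAIDZ1 (in raw marketing terabytes, before ZFS overhead).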


Google Instant Sucks!

It’s true. It really does. It’s flawed. I like the functionality but part of the implementation is frustrating. Let’s face it, when I saw it for the first time I was like OMGWTFBBQ, but now, not so much.

Going Back

This one’s a simple matter of placing focus in the Search Bar. Often I find myself browsing a site, searching for a keyword, visiting a search result then going back to the site I was originally visiting. For example:

  1. Browse site A
  2. Search for “test test”
  3. Go to first result
  4. Go back to “test test”
  5. Go back to site A

This now becomes a nuisance with Google Instant. As soon as you go back from a search result you are focused into the search query box. You need to tab out or click the back button to actually go back to the page before your search results. Often you just end up deleting a character in your query.

Auto Completion

This is just silly. I thought the whole point of Google Instant was to save time. Often Google’s prediction is pretty damn good, so now I find myself typing part of the search query (because Google has already predicted the rest) and pressing Enter! Instead, I find I’ve ended up searching for my partial query rather than the prediction. For example, searching for “google maps”:

  1. Type “google m”
  2. Press Enter

Instead, pressing Enter should use the autocompleted result (instead of me having to press Right Arrow THEN Enter). If you want to search for the literal query “google m”, you press Escape. OR the other way around: Escape hides the Google Instant box and keeps its current prediction state, and Enter searches for your incomplete query. Either solution requires just one button press instead of two.

I guess you could ask…why am I pressing Enter at all? If I type “google m” it shows me the results for the predicted text. Well, two reasons: a) I’m too damn used to pressing Enter after my query, and b) I want to get rid of the gigantic Google Instant box that hides at least 2 search results!

Take Me There

Complementing my previous problem (sort of). There NEEDS to be a keyboard shortcut to take me to a result. Be it the first result or result X on the page. So I could:

  1. Type “google m”
  2. Press keyboard shortcut to take me to first result

I propose ALT + x (x being the search result number). Time saving? I think so.

Browser Integration

Google Instant needs browser integration ASAP. No doubt Chrome will have some sort of implementation….but I don’t use Chrome, I use Firefox. A possible implementation? Well I’m glad you asked. Pretty simple:

Typing into the Firefox Search Box brings up a new tab with Google Instant results. Typing into the search box works exactly the same way as the Google Instant box…oh, and it should have support for choosing a search result with keyboard shortcuts too :P

DOM Storage

Initially, Google Instant hated me. When it launched, I simply could not get it to work with Firefox despite disabling all extensions (Safe Mode), clearing cache and cookies! It turns out Google Instant uses DOM Storage. If you happened to use a Firefox extension named BetterPrivacy (which is pretty popular!) you ended up with a broken Google page. Even if you disabled the extension, you’d still have the problem since the extension is simply a frontend for changing a flag in about:config. Either way, why on earth does Google Instant need to use DOM Storage? There’s probably a good reason for this, but surely if DOM Storage fails they could revert to Cookies…or something.

Anyway, for those having this issue:

Error: Permission denied for <https://www.google.com> to get property Window.getComputedStyle from <http://www.google.com.au>.
Error: uncaught exception: [Exception... "Security error"  code: "1000" nsresult: "0x805303e8 (NS_ERROR_DOM_SECURITY_ERR)"  location: "http://www.google.com/extern_js/f/CgJlbiswRTgBLCswWjgALCswDjgBLCswFzgHLCswJzgELCswPDgALCswUTgALCswWTgOLCswCjh_QC8sKzAWOB0sKzAZOCAsKzAhOD9AASwrMCU4z4gBLCswKjgLLCswKzgRLCswNTgELCswQDgTLCswQTgFLCswTjgGLCswVDgBLCswYzgALCswHThXLCswXDgXLCswGDgFLCswJjgOLIACGZACG/Veq5STkSvqE.js Line: 109"]

You can fix it by:

  1. Going to “about:config” in your Firefox address bar
  2. Search for “dom.storage.enabled” and set it to true
  3. Huzzah. Google Instant.
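The same flag can also be flipped from a user.js file in your Firefox profile directory, if you’d rather not poke about:config by hand:

```js
// user.js in the Firefox profile directory; applied at every startup
user_pref("dom.storage.enabled", true);
```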


Why the hell can’t Google roll out Instant to international domains at the same time? I want Australian results! There is no excuse for this.
