Ars Technica    Ars OpenForum 3.0b  Hop To Forum Categories  Macintoshian Achaia    Panther has automatic defragging?
Ars Tribunus Angusticlavius
et Magistratus Fluminis Digitalis

Tribus: Boston, MA
Registered: January 06, 2000
Posts: 19637
Posted
In my other thread, someone posted regarding defragging for Panther. This was posted:

http://nslog.com/archives/2003/10/26/qotd_fragmentation.php

...which cites this:

http://article.gmane.org/gmane.comp.macosx.general/22906

quote:
While we're talking about the not so well known/hidden new Panther features: while digging through the kernel sources i've found a quite amusing new feature built into HFS+. Namely automatic file defragmentation.

Everytime an application opens a file for reading, HFS+ checks if the file is fragmented and is less than 20MB in size. If so, it copies the file's contents to a continuous region on the disk and frees up the previously allocated blocks.


What's the story on this? Why wasn't it mentioned earlier?
Ars Scholae Palatinae
et Subscriptor

Tribus: City of Steel
Registered: July 05, 2000
Posts: 2002
Posted
God knows why it wasn't mentioned before, but if it's true, that's awesome. Now I don't even need to erase and install.

[This message was edited by Caesar on October 30, 2003 at 18:56.]
Wise, Aged Ars Veteran

Tribus: Atlanta, GA, USA
Registered: September 14, 2003
Posts: 559
AIM: FeanorAIM
Posted
I've got plenty of files that are fragmented and over 20MB, though.

(And Drive 10 said my disk was very fragmented after heavy use under Panther 7B68, as well)
Wise, Aged Ars Veteran
et Subscriptor

Tribus: vevey, switzerland
Registered: July 30, 2002
Posts: 1106
Posted
cool. i'd like some more confirmation, but, cool.
Ars Scholae Palatinae

Tribus: ☄
Registered: May 17, 2001
Posts: 2854
Posted
This is neat. Overall it will reduce file fragmentation, giving some speed increases on access, but it won't reduce filesystem fragmentation (per-file versus the entirety of the disk), which is why Drive 10 will still say that your disk is heavily fragmented.

At least, that's what my take on it would be.
Ars Praetorian
et Subscriptor

Tribus: NYC
Registered: August 28, 2002
Posts: 2386
AIM: Mangrove
Posted
quote:
Originally posted by Mr VacBob:
(And Drive 10 said my disk was very fragmented after heavy use under Panther 7B68, as well)


If this story is true, it seems it would defrag on a file-by-file (as-needed) basis.

Perhaps your volume's heavy fragmentation was from files that were not yet accessed under Panther?

If this is the case, the fragmented files shouldn't matter; if they ain't being accessed, leave 'em alone!
Ars Centurion
et Subscriptor

Tribus: Corn Desert (IL)
Registered: March 10, 2002
Posts: 765
Posted
quote:
Originally posted by [Fuzzy]:
This is neat. Overall, it will reduce file fragmentation making some speed increases on access, but won't reduce filesystem fragmentation (per file versus the entireity of the disc), which is why Drive10 will still say that your disc is heavily fragmented.


Doesn't OS X try to cache most of the filesystem in RAM anyway?
Wise, Aged Ars Veteran
Registered: March 16, 2001
Posts: 3801
Posted
As I posted in hanser's iBook G4 thread, and as fuzzy notes:

I would like to see some official Apple confirmation of this. Also, this doesn't help to optimize the placement of files on the HD. You still don't want your OS, app executables, and free space (read: VM) on the slowest part of the drive. At the moment, only Norton Utilities optimizes that.


Does Drive 10 optimize file placement on HDs and optimize the btree catalogue? I was under the impression that it only de-fragmented files (but did not optimize file location).
Smack-Fu Master, in training
Registered: October 14, 2003
Posts: 15
Posted
The feature sounds neat, but I'd actually prefer not to have this going on all the time. I'd turn it off.

I'd be perfectly satisfied if Apple simply added a basic, reliable defrag utility and let me run it whenever necessary.
Ars Apple Technology Specialist

Tribus: Newton, MA, US
Registered: February 19, 1999
Posts: 3347
Posted
quote:
I would like to see some official Apple confirmation of this.


How's this?

static int
hfs_open(ap)
	struct vop_open_args /* {
		struct vnode *a_vp;
		int  a_mode;
		struct ucred *a_cred;
		struct proc *a_p;
	} */ *ap;
{
	struct vnode *vp = ap->a_vp;
	struct filefork *fp = VTOF(vp);
	struct timeval tv;

	/*
	 * Files marked append-only must be opened for appending.
	 */
	if ((vp->v_type != VDIR) && (VTOC(vp)->c_flags & APPEND) &&
	    (ap->a_mode & (FWRITE | O_APPEND)) == FWRITE)
		return (EPERM);

	if (ap->a_mode & O_EVTONLY) {
		if (vp->v_type == VREG) {
			++VTOF(vp)->ff_evtonly_refs;
		} else {
			++VTOC(vp)->c_evtonly_refs;
		};
	};

	/*
	 * On the first (non-busy) open of a fragmented
	 * file attempt to de-frag it (if its less than 20MB).
	 */
	if ((VTOHFS(vp)->hfs_flags & HFS_READ_ONLY) ||
	    !UBCISVALID(vp) || ubc_isinuse(vp, 1)) {
		return (0);
	}
	fp = VTOF(vp);
	if (fp->ff_blocks &&
	    fp->ff_extents[7].blockCount != 0 &&
	    fp->ff_size <= (20 * 1024 * 1024)) {
		/* 
		 * Wait until system bootup is done (3 min).
		 */
		microuptime(&tv);
		if (tv.tv_sec < (60 * 3)) {
			return (0);
		}
		(void) hfs_relocate(vp, VTOVCB(vp)->nextAllocation + 4096, ap->a_cred, ap->a_p);
	}

	return (0);
}


/*
 * Relocate a file to a new location on disk
 *  cnode must be locked on entry
 *
 * Relocation occurs by cloning the file's data from its
 * current set of blocks to a new set of blocks. During
 * the relocation all of the blocks (old and new) are
 * owned by the file.
 *
 * -----------------
 * |///////////////|
 * -----------------
 * 0               N (file offset)
 *
 * -----------------     -----------------
 * |///////////////|     |               |     STEP 1 (aquire new blocks)
 * -----------------     -----------------
 * 0               N     N+1             2N
 *
 * -----------------     -----------------
 * |///////////////|     |///////////////|     STEP 2 (clone data)
 * -----------------     -----------------
 * 0               N     N+1             2N
 *
 *                       -----------------
 *                       |///////////////|     STEP 3 (head truncate blocks)
 *                       -----------------
 *                       0               N
 *
 * During steps 2 and 3 page-outs to file offsets less
 * than or equal to N are suspended.
 *
 * During step 3 page-ins to the file get supended.
 */
__private_extern__
int
hfs_relocate(vp, blockHint, cred, p)
	struct  vnode *vp;
	u_int32_t  blockHint;
	struct  ucred *cred;
	struct  proc *p;
{
	struct  filefork *fp;
	struct  hfsmount *hfsmp;
	ExtendedVCB *vcb;

	u_int32_t  headblks;
	u_int32_t  datablks;
	u_int32_t  blksize;
	u_int32_t  realsize;
	u_int32_t  growsize;
	u_int32_t  nextallocsave;
	u_int32_t  sector_a;
	u_int32_t  sector_b;
	int eflags;
	u_int32_t  oldstart;  /* debug only */
	off_t  newbytes;
	int  retval;

	if (vp->v_type != VREG && vp->v_type != VLNK) {
		return (EPERM);
	}
	
	hfsmp = VTOHFS(vp);
	if (hfsmp->hfs_flags & HFS_FRAGMENTED_FREESPACE) {
		return (ENOSPC);
	}

	fp = VTOF(vp);
	if (fp->ff_unallocblocks)
		return (EINVAL);
	vcb = VTOVCB(vp);
	blksize = vcb->blockSize;
	if (blockHint == 0)
		blockHint = vcb->nextAllocation;

	if ((fp->ff_size > (u_int64_t)0x7fffffff) ||
	    (vp->v_type == VLNK && fp->ff_size > blksize)) {
		return (EFBIG);
	}

	headblks = fp->ff_blocks;
	datablks = howmany(fp->ff_size, blksize);
	growsize = datablks * blksize;
	realsize = fp->ff_size;
	eflags = kEFContigMask | kEFAllMask | kEFNoClumpMask;
	if (blockHint >= hfsmp->hfs_metazone_start &&
	    blockHint <= hfsmp->hfs_metazone_end)
		eflags |= kEFMetadataMask;

	hfs_global_shared_lock_acquire(hfsmp);
	if (hfsmp->jnl) {
		if (journal_start_transaction(hfsmp->jnl) != 0) {
			return (EINVAL);
		}
	}

	/* Lock extents b-tree (also protects volume bitmap) */
	retval = hfs_metafilelocking(hfsmp, kHFSExtentsFileID, LK_EXCLUSIVE, p);
	if (retval)
		goto out2;

	retval = MapFileBlockC(vcb, (FCB *)fp, 1, growsize - 1, &sector_a, NULL);
	if (retval) {
		retval = MacToVFSError(retval);
		goto out;
	}

	/*
	 * STEP 1 - aquire new allocation blocks.
	 */
	nextallocsave = vcb->nextAllocation;
	retval = ExtendFileC(vcb, (FCB*)fp, growsize, blockHint, eflags, &newbytes);
	if (eflags & kEFMetadataMask)                   
		vcb->nextAllocation = nextallocsave;

	retval = MacToVFSError(retval);
	if (retval == 0) {
		VTOC(vp)->c_flag |= C_MODIFIED;
		if (newbytes < growsize) {
			retval = ENOSPC;
			goto restore;
		} else if (fp->ff_blocks < (headblks + datablks)) {
			printf("hfs_relocate: allocation failed");
			retval = ENOSPC;
			goto restore;
		}

		retval = MapFileBlockC(vcb, (FCB *)fp, 1, growsize, &sector_b, NULL);
		if (retval) {
			retval = MacToVFSError(retval);
		} else if ((sector_a + 1) == sector_b) {
			retval = ENOSPC;
			goto restore;
		} else if ((eflags & kEFMetadataMask) &&
		           ((((u_int64_t)sector_b * hfsmp->hfs_phys_block_size) / blksize) >
		              hfsmp->hfs_metazone_end)) {
			printf("hfs_relocate: didn't move into metadata zone\n");
			retval = ENOSPC;
			goto restore;
		}
	}
	if (retval) {
		/*
		 * Check to see if failure is due to excessive fragmentation.
		 */
		if (retval == ENOSPC &&
		    hfs_freeblks(hfsmp, 0) > (datablks * 2)) {
			hfsmp->hfs_flags |= HFS_FRAGMENTED_FREESPACE;
		}
		goto out;
	}

	fp->ff_size = fp->ff_blocks * blksize;
	if (UBCISVALID(vp))
		(void) ubc_setsize(vp, fp->ff_size);

	/*
	 * STEP 2 - clone data into the new allocation blocks.
	 */

	if (vp->v_type == VLNK)
		retval = hfs_clonelink(vp, blksize, cred, p);
	else if (vp->v_flag & VSYSTEM)
		retval = hfs_clonesysfile(vp, headblks, datablks, blksize, cred, p);
	else
		retval = hfs_clonefile(vp, headblks, datablks, blksize, cred, p);

	if (retval)
		goto restore;
	
	oldstart = fp->ff_extents[0].startBlock;

	/*
	 * STEP 3 - switch to clone and remove old blocks.
	 */
	SET(VTOC(vp)->c_flag, C_NOBLKMAP);   /* suspend page-ins */

	retval = HeadTruncateFile(vcb, (FCB*)fp, headblks);

	CLR(VTOC(vp)->c_flag, C_NOBLKMAP);   /* resume page-ins */
	if (ISSET(VTOC(vp)->c_flag, C_WBLKMAP))
		wakeup(VTOC(vp));
	if (retval)
		goto restore;

	fp->ff_size = realsize;
	if (UBCISVALID(vp)) {
		(void) ubc_setsize(vp, realsize);
		(void) vinvalbuf(vp, V_SAVE, cred, p, 0, 0);
	}

	CLR(VTOC(vp)->c_flag, C_RELOCATING);  /* Resume page-outs for this file. */
out:
	(void) hfs_metafilelocking(VTOHFS(vp), kHFSExtentsFileID, LK_RELEASE, p);

	retval = VOP_FSYNC(vp, cred, MNT_WAIT, p);
out2:
	if (hfsmp->jnl) {
		if (VTOC(vp)->c_cnid < kHFSFirstUserCatalogNodeID)
			(void) hfs_flushvolumeheader(hfsmp, MNT_WAIT, HFS_ALTFLUSH);
		else
			(void) hfs_flushvolumeheader(hfsmp, MNT_NOWAIT, 0);
		journal_end_transaction(hfsmp->jnl);
	}
	hfs_global_shared_lock_release(hfsmp);

	return (retval);

restore:
	/*
	 * Give back any newly allocated space.
	 */
	if (fp->ff_size != realsize)
		fp->ff_size = realsize;
	(void) TruncateFileC(vcb, (FCB*)fp, fp->ff_size, false);
	if (UBCISVALID(vp))
		(void) ubc_setsize(vp, fp->ff_size);
	CLR(VTOC(vp)->c_flag, C_RELOCATING);
	goto out;
}
Ars Tribunus Militum

Tribus: the subtropical gardens of the Morlocks
Registered: April 06, 2001
Posts: 7953
AIM: gunbear2k
Posted
quote:
Originally posted by King Friday:
The feature sounds neat, but I'd actually prefer not to have this going on all the time. I'd turn it off.


Why?

quote:

I'd be perfectly satisfied if Apple simply added a basic, reliable defrag utility and let me run it whenever necessary.


Well, that's a much, much harder 'app' to write than what they've done here, since the 'app' has to keep a map of files, fragmented files, and free spaces, and somehow create an optimized 'map' containing as many unfragmented files and as few open spaces as possible, all while doing it as quickly as possible...

Which isn't easy ^^
Ars Tribunus Militum

"Ars Oversleeping Specialist"

Tribus: Mount Real
Registered: May 07, 2000
Posts: 9773
Posted
can someone translate what John said into English?
Ars Praetorian

Tribus: ✌
Registered: August 22, 2002
Posts: 2373
Posted
quote:
Originally posted by BEIGE:
can someone translate what John said into English?

Yeah, I think he said "Here is your confirmation TigerKR. Please have a good day now." I know the language but the dialect is foreign to me. Wink
Ars Scholae Palatinae

Tribus: I am not going to belittle Fluffy's suffering
Registered: August 31, 2001
Posts: 4944
Posted
/*
 * Relocate a file to a new location on disk
 *  cnode must be locked on entry
 *
 * Relocation occurs by cloning the file's data from its
 * current set of blocks to a new set of blocks. During
 * the relocation all of the blocks (old and new) are
 * owned by the file.
 *
 * -----------------
 * |///////////////|
 * -----------------
 * 0               N (file offset)
 *
 * -----------------     `´`´`´`´`´`´`´`´`
 * |///////////////|     }    whirr...   {     STEP 1 (aquire new blocks)
 * -----------------     `´`´`´`´`´`´`´`´`
 * 0               N     N+1             2N
 *
 * -----------------     -----------------
 * |       ////////| ===}|///////        |     STEP 2 (clone data)
 * -----------------     -----------------
 * 0               N shhhwip!            2N
 *
 *                       -----------------
 *     :>POOF!<:         |////*gleam*////|     STEP 3 (head truncate blocks)
 *                       -----------------
 *                       0               N
 *
 * During steps 2 and 3 page-outs to file offsets less
 * than or equal to N are suspended.
 *
 * During step 3 page-ins to the file get supended.
 */

Leave it to Apple to draw diagrams in their comment blocks. (Okay, I may have embellished a bit for Aqua effect.)

And I'll take that file-block-moving diagram as confirmation -- if John will tell us where he found it... Smile
Smack-Fu Master, in training
Registered: August 30, 2003
Posts: 13
Posted
FYI, it's called "Hot-File-Adaptive-Clustering". :-)
Ars Praetorian

Tribus: Huntsville, AL, USA, Terra, Sol
Registered: August 19, 2001
Posts: 1093
Posted
quote:
Originally posted by iconmaster:
And I'll take that file-block-moving diagram as confirmation -- if John will tell us where he found it...


Presumably from darwin kernel source..

quote:
Originally posted by xrules:
FYI. Its called "Hot-File-Adaptive-Clustering". :-)


Hot File-on-File action!
Ars Tribunus Angusticlavius
et Subscriptor

Tribus: CB4
Registered: December 16, 1999
Posts: 5363
Posted
I wanna program with sound effects. We know some of the programmers are having fun.
Ars Scholae Palatinae
et Subscriptor

Tribus: Kingdom of the Four
Registered: November 20, 2001
Posts: 2803
Posted
I could see this being the root of the file errors on external FireWire hard drives. If there were a bug in this code, cloning the data to a new region would result in a bunch of gibberish and data loss. Frown
Ars Apple Technology Specialist

Tribus: Newton, MA, US
Registered: February 19, 1999
Posts: 3347
Posted
quote:
I'll take that file-block-moving diagram as confirmation -- if John will tell us where he found it...


That was from the Darwin 7.0 source code.
Wise, Aged Ars Veteran

Tribus: Naperville, IL
Registered: May 18, 2002
Posts: 498
Posted
TigerKR,

Does the latest Norton Utilities (specifically Disk Doctor & Speed Disk) work with Panther? I'm talking about the Boot CD.

I'm too scared to try it out. Wink
RW
Ars Scholae Palatinae

Tribus: Left Coast
Registered: August 19, 2001
Posts: 1565
Posted
uh, I gotta ask, does it really say 'shhhwip' and Poof! in the comments?

If so I need to really review my coding style guide! Clearly I'm not having enough fun..
Ars Scholae Palatinae

Tribus: I am not going to belittle Fluffy's suffering
Registered: August 31, 2001
Posts: 4944
Posted
quote:
Originally posted by stephenb:
I wanna program with sound effects. We know some of the programmers are having fun.

quote:
Originally posted by storme:
uh, I gotta ask, does it really say 'shhhwip' and Poof! in the comments?

Sorry guys -- "embellished a bit" as it says above. Let's not start any rumors about the Apple developers turning their code into ASCII comic strips; we'd never hear the end of it in the Battlefront. Big Grin
Ars Tribunus Militum

Tribus: Cirque du Los Angeles
Registered: February 26, 2001
Posts: 12240
Posted
I'm very happy to see this.

Question- how does this bode for more extensive cron jobs run during the off hours? Shouldn't I be able to schedule a more extensive defragging at 4 am?


Second question- What counts as a fragmented file? I don't really worry about a file that's in two or three pieces. It's the system files that end up in 30 or 40 pieces that are a real problem, and I'm not really sure I want the OS defragging that kind of file when I'm opening it ('cause that's when I need it).

I'd much rather have the OS keeping track of which files are fragmented (maybe flagging the worst offenders/prioritizing) and then defragging them when the system is idle.

Don't get me wrong, I'm very happy to see this autodefrag implementation, I'm just a little worried how it might impact performance, especially on a full HD. I suppose any performance hit will largely be compensated for by performance gains once this system's run for a while, hmm?

[This message was edited by Ashby on October 29, 2003 at 11:57.]
Ars Scholae Palatinae

Tribus: I am not going to belittle Fluffy's suffering
Registered: August 31, 2001
Posts: 4944
Posted
I guess you could just write a script that opens a bunch of your documents, then closes everything down a few minutes later. Maybe on Monday nights, everything in /Documents gets opened (and defragged by the system), /Music on Tuesday, /Pictures on Wednesday, etc..
Ars Praetorian
et Subscriptor

Tribus: Trafalmadore
Registered: April 22, 2002
Posts: 1261
AIM: sacrossman
Posted
This must be true: I booted up from my Norton 6 disk, and using Speed Disk I found only 100 fragmented files out of 150,000. This is after multiple Panther beta installs, too.
Wise, Aged Ars Veteran
Registered: March 16, 2001
Posts: 3801
Posted
quote:
Originally posted by Rollins:
Does the latest Norton Utilities (specifically Disk Doctor & Speed Disk) work with Panther? I'm talking about the Boot CD.
To be honest, my family pack is unopened (and of course uninstalled) at the moment because I'm getting a new business off the ground. However, Symantec has this to say:
quote:
Symantec 2003-2004 product compatibility with Mac OS X 10.3 (code named Panther)

Situation:
This document outlines compatibility between Symantec 2003-2004 products and Mac OS X 10.3 (code named Panther).

Solution:
Symantec Macintosh software is not yet fully compatible with Apple's new operating system Mac OS X 10.3 (code named Panther), released October 24, 2003. Symantec is currently working on updates for compatibility with Mac OS X 10.3. This document will be updated when additional information is available. Bookmark this page and check back for news.

If you are running Mac OS X 10.3 and a Symantec 2003-2004 Macintosh product, refer to the section for your product below for compatibility information.

...

Norton SystemWorks for Macintosh 3.0.x

In Mac OS X 10.3 (code named Panther), no disks appear in the Norton Disk Doctor, UnErase, Volume Recover or Speed Disk window. Norton FileSaver runs in the background at startup, but no scheduled FileSaver events run.

To use Norton Utilities on a Macintosh with Mac OS X 10.3 installed, you must start up from the Norton Utilities or Norton SystemWorks CD, a second partition, or an external hard drive with Mac OS X 10.1.5 -10.2.8 and Norton Utilities installed. Another solution, if you have a second Macintosh with FireWire, is to use FireWire target disk mode to examine the hard disk on your Macintosh with Mac OS X 10.3 installed. Read the document How to use FireWire target disk mode to run Norton Utilities from a second computer.

Norton AntiVirus Auto-Protect works in Mac OS X 10.3 to protect your computer from viruses. You may also scan disks with Norton AntiVirus for Macintosh in Mac OS X 10.3. When "fast user switching" is enabled, Norton AntiVirus alerts appear only for the user that first logged in. If another user is active when the alert occurs, the alert does not appear for that user.

You must run LiveUpdate manually in Mac OS X 10.3. Norton Scheduler does not work in Mac OS X 10.3, so scheduled events for Norton AntiVirus, Norton FileSaver, Speed Disk or LiveUpdate do not run.

Symantec is currently working on updates for compatibility with Mac OS X 10.3 and this document will be updated when one is available.
Long story short, you have to boot from the Norton CD or another non-Panther partition/drive in order to use Norton (which you would do for repairs and optimization anyway), and in that scenario, it will work.
quote:
Originally posted by iconmaster:
I guess you could just write a script that opens a bunch of your documents, then closes everything down a few minutes later. Maybe on Monday nights, everything in /Documents gets opened (and defragged by the system), /Music on Tuesday, /Pictures on Wednesday, etc..
Very creative idea - I wouldn't have thought of that Smile. Unfortunately, you'd want to restart after that, because your system would likely start paging like a mofo. Also, opening documents and saving them could introduce problems with save-and-replace dialogue boxes and possible application errors.
Ars Apple Technology Specialist

Tribus: Newton, MA, US
Registered: February 19, 1999
Posts: 3347
Posted
quote:
What count's as a fragmented file?


As far as the auto-defrag code is concerned, this:

if (fp->ff_blocks &&
    fp->ff_extents[7].blockCount != 0 &&
    fp->ff_size <= (20 * 1024 * 1024))


which roughly means "files less than 20MB in size that have a non-zero block count in the last of their eight extent structures." You'll have to chase all those structs through the code to find out whether that necessarily indicates fragmentation or is just a best guess.
Ars Praetorian
et Subscriptor

Tribus: Trafalmadore
Registered: April 22, 2002
Posts: 1261
AIM: sacrossman
Posted
quote:
Originally posted by Rollins:
TigerKR,

Does the latest Norton Utilities (specifically Disk Doctor & Speed Disk) work with Panther? I'm talking about the Boot CD.

I'm too scared to try it out. Wink
RW


I was brave and used Speed Disk v6 from the Norton Startup CD which had 9.04 to optimize my Panther install. I have never had problems with SD, but I won't touch Disk Doctor any longer.
Ars Scholae Palatinae

Tribus: I am not going to belittle Fluffy's suffering
Registered: August 31, 2001
Posts: 4944
Posted
quote:
Originally posted by TigerKR:
Very creative idea, I wouldn't have thought of that Smile

Thanks. Smile IANAP, but I like problem-solving.
quote:
opening documents and saving them could introduce problems with save and replace dialogue boxes and possible application errors.

Doesn't the defragmentation happen on opening the file? As I understand it, saving wouldn't be necessary -- just a lot of opening and closing.
quote:
unfortunately you'd want to restart after that, because your system would likely start paging

Agreed. Perhaps the script could end with a restart command; or you could turn on Panther's scheduled shutdown feature, timed sufficiently later than the script.
Wise, Aged Ars Veteran

Tribus: Fremont, CA
Registered: October 24, 2001
Posts: 225
Posted
quote:
Originally posted by iconmaster:
...
quote:
unfortunately you'd want to restart after that, because your system would likely start paging

Agreed. Perhaps the script could end with a restart command; or you could turn on Panther's scheduled shutdown feature, timed sufficiently later than the script.

Why is this necessary? So, it starts paging in and out in the middle of the night. It'll be done by morning, and the memory will be freed in due time. My concern is not so much about uptime (Smile) but about having to close stuff down and then starting it back up after login.

-Ster
Wise, Aged Ars Veteran

Tribus: Boise, ID, United States
Registered: August 10, 2003
Posts: 678
AIM: kingofpeppers
Posted
quote:
Originally posted by The Limey:
God knows why it wasn't mentioned before, but if it's true, that's awesome. Now I don't even need to erase and install.


Word to that my friend.

and Mr VacBob: I erased and installed Panther on the iBook I sold, and Drive 10 v1.1.4 said it was mucho fragmented.

Funny that.
Wise, Aged Ars Veteran
Registered: December 06, 2000
Posts: 265
Posted
That could be the most beautiful and best documented code I have ever seen.
Wise, Aged Ars Veteran

Tribus: Toronto, Ontario, Canada
Registered: June 01, 2001
Posts: 343
Posted
Has anyone verified that this is enabled by default?
Ars Apple Technology Specialist

Tribus: Newton, MA, US
Registered: February 19, 1999
Posts: 3347
Posted
quote:
Has anyone verified that this is enabled by default?


I don't see how it could be disabled.
Wise, Aged Ars Veteran

Tribus: Fremont, CA
Registered: October 24, 2001
Posts: 225
Posted
John,

Is that code executed when the file is actually read, or when the file is just opened? I was thinking it would be fairly straightforward to `find / -type f` to list all the files, and pipe that to a program (or script?) that would just call `open()` and then `close()` on each one. If it defrags on `open()`, that would end up defragging all the eligible files, w/o the overhead of having to launch the proper program or deal with saving and all that stuff.

-Ster
Smack-Fu Master, in training
Registered: October 25, 2003
Posts: 24
Posted
As far as I can tell from reading the code:

HFS+ references files using "extents". An extent is a data structure in the HFS catalog file (the file that keeps track of all the other files in the system). A given extent stores a start location and a length - essentially, it tracks a single contiguous chunk of bytes on disk. As a file becomes fragmented, it uses more and more extents to keep track of it (e.g. a 100-byte file might be tracked in 1 extent of 20 bytes, 3 extents of 10 bytes each, and 1 extent of 50 bytes). Up to 8 extents can be tracked directly in the catalog tree. If a file has more than 8 extents, they are tracked in another file - the Extents Overflow File. Access to a file that is tracked in both the Catalog and the Extents Overflow File is slow - both because it implies that the file is (relatively) highly fragmented, and because the Extents Overflow File probably isn't as "hot" as the Catalog file (not as likely to be cached). In general, access to a fragmented file is slow.

What this code is doing is: when a file is opened, OS X will look at it. If the file has any data, is less than 20MB in size, and has 8 or more extents, it will be defragmented. Note that the defragmentation won't happen if the disk has very little contiguous free space left, since you don't have the space to defragment into.

I'd guess that the parameters (20MB and 8+ extents) were gathered through some sort of research or testing. It seems high for my taste, but I'll trust the engineers (particularly since Panther seems pretty fast Smile). My guess as to why this happens _all_ the time: willingness to pay an up-front cost for speed in subsequent operations. There might also be the possibility that most of the time an open is followed by a read, so defragging might not do much more than preheat the file cache. I'm not sure what "non-busy" means in the code; I'll have to do more digging. One nice thing about this scheme is that you don't spend time moving around files that no one cares about. If you don't use it, it doesn't get defragged.

The rest of the hotfile mechanism is interesting, and hasn't been discussed yet - it looks like over a period of time, OS X keeps track of which files have been used the most and moves them to the "hot band". This is already a bonus, since it'll defrag the files during the move, but I'm not sure what the "hot band" is (probably the fastest area on the physical hard disk, but I'm not sure)... Looks like this all happens in the background over several days. It'd be pretty cool if Panther got faster with extended use Smile
Ars Tribunus Militum

Tribus: Cirque du Los Angeles
Registered: February 26, 2001
Posts: 12240
Posted
quote:
Why is this necessary? So, it starts paging in and out in the middle of the night. It'll be done by morning, and the memory will be freed in due time. My concern is not so much about uptime () but about having to close stuff down and then starting it back up after login.



The problem with paging with this sort of routine is that you may page out all sorts of things that you actually want in memory in exchange for a bunch of files that you may or may not want in memory. Ideal might be to assign some sort of memory throttle for this routine e.g. Photoshop allows you to set a maximum % of available RAM to be used for the program. That allows you to prevent Photoshop from eating all your RAM.

If you could run a cron-job background defragger and limit the process to 50% of available free RAM, that would prevent it from displacing stuff you are actually using.

Now, I'm not a programmer, but I would hope that OSX is getting smarter about what it pages out e.g. sets priorities for what code is less speed critical and pages that out. Has anyone seen any signs of this? My understanding is that OSX used to page out based on 'longest time since code has been used'. The problem with that is you may have some huge recent app or a memory leak that squeezes all sorts of speed critical code out to disk. Unless the file system prioritizes system files etc. over those three huge PDFs you recently opened but forgot to close, you can really slow down your performance.

These days, about the only reason I restart my box (aside from system updates) is because I've gone into swap. The computer simply never seems as fast once it's begun swapping. I'm convinced that's because the computer doesn't properly prioritize what to swap out, and ends up having to page speed-critical code back in.

-

Discord-

Thanks for the analysis. Sounds like the program may be doing exactly what I've been hoping for- intelligent defragging and strategic file placement. Yay!
Wise, Aged Ars Veteran
Registered: March 16, 2001
Posts: 3801
Posted
quote:
Originally posted by iconmaster:
Doesn't the defragmentation happen on opening the file? As I understand it, saving wouldn't be necessary -- just a lot of opening and closing.
I'm not sure about that. I'm not sure how Apple implemented it.

I would think that defragging on open would defrag more files; however, it would also allow more room for errors, because you're altering the file without the user making changes (when normally, a file's magnetic state wouldn't be altered until the user made changes).

EDIT: I guess discord answered that question. Files are defragged on open, but only if they're small and really, really fragmented. So further optimization is possible.
quote:
Originally posted by discord:
The rest of the hotfile mechanism is interesting, and hasn't been discussed yet - it looks like over a period of time, OSX keeps track of which files have been use the most and moves them to the "hot band". This is already a bonus, since it'll defrag the files during the move, but I'm not sure what the "hot band" is (probably the fastest area on the physical hard disk, but I'm not sure)... Looks like this all happens in the background over several days. It'd be pretty cool if Panther got faster with extended use Smile
I actually don't like this idea (except when VM is on an exclusive partition). I think that unused files should be moved onto the slowest part of the drive as opposed to moving often-used files to the fast part of the drive. Why? Because VM is used more than basically any file, and therefore performance of the system would be better if VM was always on the fastest part of the drive - and that can't happen if OS X is constantly piling files onto the fastest part of the drive.

Having said that, I have separate partitions for VM, Scratch, OS+Apps, Files (non-AV), AV Files, and Installers + Updaters. So I would benefit greatly from the scheme described by discord. But my setup isn't typical. Then again, most users don't care.

Off Topic: I would just like to point out that new posters like discord are the reason that the Ach should not become subscriber only. discord, welcome to Ars Technica! (And welcome to all the other MacSlashers visiting!)

[This message was edited by TigerKR on October 30, 2003 at 02:41.]
Ars Scholae Palatinae

Tribus: I am not going to belittle Fluffy's suffering
Registered: August 31, 2001
Posts: 4944
Posted
quote:
Originally posted by discord:
It'd be pretty cool if Panther got faster with extended use Smile

That's what I'm hoping too. (Rock on, Apple Computer.) Thanks for the great analysis, discord.

TigerKR: I think it's pretty clear from the comments:

         /*
         * On the first (non-busy) open of a fragmented
         * file attempt to de-frag it (if its less than 20MB).
         */


Edit: I see you were persuaded. But I'll leave my supporting evidence in 'cause it makes me look like I understand this stuff.
Ars Praetorian

Tribus: Paris, France
Registered: April 14, 2001
Posts: 968
Posted
Are there any other hidden jewels in the Darwin 7 source code? I'm really wondering why Apple wouldn't document such a useful feature...

© Ars Technica, LLC 1998-2006.

