Logic in Disk Defragmentation & Disk Check?

For completeness' sake, here's a C# API wrapper for defragmentation: blogs.msdn.com/jeffrey%5Fwall/archive/20... Defragmentation with these APIs is (supposed to be) very safe nowadays; you shouldn't be able to corrupt the file system even if you wanted to. Commercial defragmentation programs use the same APIs.

The difference being that the commercial companies know about file systems, and the OP couldn't even define defragmentation. I'd want him to know what a trigger is before I give him a gun. – John Saunders Aug 21 '09 at 0:29.

Look at Defragmenting Files on MSDN for possible API helpers. Think carefully before using C# for this task, as every call has to be marshaled into native Win32, which adds overhead; a sketch of that plumbing follows.
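To give a flavor of what that marshaling looks like, here is a minimal sketch of the plumbing underneath those helpers: the FSCTL_MOVE_FILE control code and its MOVE_FILE_DATA structure (both from winioctl.h), driven through DeviceIoControl. The wrapper class and method names are my own for illustration, not from the MSDN article:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class DefragNative
{
    // CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 29, METHOD_BUFFERED, FILE_ANY_ACCESS)
    const uint FSCTL_MOVE_FILE = 0x00090074;

    [StructLayout(LayoutKind.Sequential)]
    struct MOVE_FILE_DATA
    {
        public IntPtr FileHandle;  // handle to the file being defragmented
        public long StartingVcn;   // first virtual cluster of the run to move
        public long StartingLcn;   // target logical cluster on the volume
        public uint ClusterCount;  // number of clusters to move
    }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(
        SafeFileHandle hVolume, uint ioControlCode,
        ref MOVE_FILE_DATA inBuffer, int inBufferSize,
        IntPtr outBuffer, int outBufferSize,
        out int bytesReturned, IntPtr overlapped);

    // Asks the file system to move 'clusterCount' clusters of 'file',
    // starting at virtual cluster 'vcn', to logical cluster 'lcn'.
    // 'volume' must be a handle to the containing volume (e.g. \\.\C:),
    // which requires administrator rights to open.
    public static void MoveClusters(SafeFileHandle volume, SafeFileHandle file,
                                    long vcn, long lcn, uint clusterCount)
    {
        var data = new MOVE_FILE_DATA
        {
            FileHandle = file.DangerousGetHandle(),
            StartingVcn = vcn,
            StartingLcn = lcn,
            ClusterCount = clusterCount
        };
        int returned;
        if (!DeviceIoControl(volume, FSCTL_MOVE_FILE,
                             ref data, Marshal.SizeOf(typeof(MOVE_FILE_DATA)),
                             IntPtr.Zero, 0, out returned, IntPtr.Zero))
            throw new Win32Exception(Marshal.GetLastWin32Error());
    }
}
```

The reason these APIs are safe is visible in the shape of the call: you only ask the file system to move clusters, and NTFS performs the copy-and-remap itself under its own locking; the caller never writes raw blocks.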

Mark Russinovich wrote an article, Inside Windows NT Disk Defragmentation, a while ago which gives in-depth details. If you really want to do this, I would really advise you to use the built-in facilities for defragmenting. What's more, on recent OSes I have never seen a need as a user to even care about defragmenting; it is done automatically on a schedule, and the NTFS folks at MS are definitely smarter at that stuff than you (sorry, but they have been doing this for a long time, and you haven't).

This article is very helpful. – Sauron Jul 6 '09 at 9:20.

Despite its importance, the file system is no more than a data structure that maps file names into lists of disk blocks, and keeps track of meta-information such as the actual length of each file and special files that hold lists of other files (e.g., directories). A disk checker verifies that this data structure is consistent.

That is, every disk block must either be free for allocation or belong to exactly one file. The checker can also catch certain cases where a set of disk blocks looks like a file that should appear in some directory but, for some reason, does not. Defragmentation, by contrast, is about the lists of disk blocks assigned to each file; a toy version of the consistency check is sketched below.
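To make that invariant concrete, here is a minimal sketch of the block-ownership check over an in-memory model of the allocation structures. Nothing here touches a real disk, and every name (ToyDiskChecker, the dictionaries) is invented for illustration; a real checker does the same walk over on-disk structures such as the FAT, or NTFS's $Bitmap and MFT.

```csharp
using System;
using System.Collections.Generic;

// Toy model: each block must be free, or owned by exactly one file.
static class ToyDiskChecker
{
    public static List<string> Check(
        int totalBlocks,
        IReadOnlyDictionary<string, int[]> fileBlockLists, // file name -> its block list
        ISet<int> freeList)                                // blocks marked free
    {
        var problems = new List<string>();
        var owner = new string[totalBlocks]; // null = unclaimed so far

        foreach (var file in fileBlockLists)
            foreach (int b in file.Value)
            {
                if (b < 0 || b >= totalBlocks)
                    problems.Add($"'{file.Key}' references non-existent block {b}");
                else if (freeList.Contains(b))
                    problems.Add($"block {b} is on the free list but used by '{file.Key}'");
                else if (owner[b] != null)
                    problems.Add($"block {b} claimed by both '{owner[b]}' and '{file.Key}'");
                else
                    owner[b] = file.Key;
            }

        // Anything neither owned nor free is a "lost" block (chkdsk's lost clusters).
        for (int b = 0; b < totalBlocks; b++)
            if (owner[b] == null && !freeList.Contains(b))
                problems.Add($"block {b} is neither free nor owned (lost block)");

        return problems;
    }
}
```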

Files will generally load faster if they occupy a contiguous run of blocks rather than blocks scattered all over the disk, and the file system as a whole will generally perform best if all the disk blocks in use are confined to a single contiguous region of the disk. Thus the trick is moving disk blocks around safely to achieve this end while not destroying the file system. (A sketch of how to measure a file's fragmentation follows.)
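Before moving anything, you need to know which files are fragmented. Windows exposes a file's extent list (its runs of virtual-to-logical clusters) through the FSCTL_GET_RETRIEVAL_POINTERS control code. A hedged sketch, with my own names and the simplifying assumption that a 64 KB output buffer is enough (a heavily fragmented file will fail with ERROR_MORE_DATA and needs a bigger buffer):

```csharp
using System;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class FragmentCounter
{
    // CTL_CODE(FILE_DEVICE_FILE_SYSTEM, 28, METHOD_NEITHER, FILE_ANY_ACCESS)
    const uint FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073;
    const int ERROR_HANDLE_EOF = 38; // returned for tiny files resident in the MFT

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(
        SafeFileHandle hFile, uint ioControlCode,
        ref long startingVcn, int inBufferSize,  // STARTING_VCN_INPUT_BUFFER
        byte[] outBuffer, int outBufferSize,     // RETRIEVAL_POINTERS_BUFFER
        out int bytesReturned, IntPtr overlapped);

    // Returns the number of extents the file occupies; 1 means contiguous.
    public static int CountExtents(string path)
    {
        using (var fs = new FileStream(path, FileMode.Open,
                                       FileAccess.Read, FileShare.ReadWrite))
        {
            long startVcn = 0;                // start from the file's first cluster
            var buffer = new byte[64 * 1024]; // assumed big enough for this sketch
            int returned;
            if (!DeviceIoControl(fs.SafeFileHandle, FSCTL_GET_RETRIEVAL_POINTERS,
                                 ref startVcn, sizeof(long),
                                 buffer, buffer.Length,
                                 out returned, IntPtr.Zero))
            {
                int err = Marshal.GetLastWin32Error();
                if (err == ERROR_HANDLE_EOF) return 0; // no extents: data lives in MFT
                throw new Win32Exception(err);         // e.g. ERROR_MORE_DATA
            }
            // First DWORD of RETRIEVAL_POINTERS_BUFFER is ExtentCount.
            return BitConverter.ToInt32(buffer, 0);
        }
    }
}
```

One caveat: the extent count only approximates the fragment count (compressed and sparse files split extents for other reasons), but it is enough to decide whether a file is worth defragmenting.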

The major difficulty here is running such a tool while the disk is in use. It is possible, but one has to be very, very, very careful not to make some obvious, or extremely subtle, error and destroy most or all of the files. It is easier to work on a file system offline. The other difficulty is dealing with the complexities of the file system itself.

For example, you'd be much better off building something that supports FAT32 rather than NTFS, because the former is a much, much simpler file system. As long as you have low-level block access and some sensible way of dealing with concurrency problems (best handled by working on the file system when it is not in use), you can do this in C#, Perl, or any language you like. BUT BE VERY CAREFUL. Early versions of the program will destroy entire file systems.

Later versions will do so too, but only under obscure circumstances. And users get extremely angry and litigious if you destroy their data. (A read-only sketch of raw volume access follows.)
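As for what "low-level block access" means in C#: you open the volume itself via the \\.\ namespace and do sector-aligned I/O on it. The read-only sketch below (the volume path and sector size are assumptions) is harmless; opening the same handle for writing is precisely how those early versions destroy file systems:

```csharp
using System;
using System.ComponentModel;
using System.IO;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.Win32.SafeHandles;

static class RawVolume
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern SafeFileHandle CreateFile(
        string fileName, uint access, uint share, IntPtr security,
        uint creationDisposition, uint flags, IntPtr template);

    const uint GENERIC_READ = 0x80000000;
    const uint FILE_SHARE_READ = 1, FILE_SHARE_WRITE = 2;
    const uint OPEN_EXISTING = 3;

    static void Main()
    {
        // "\\.\C:" addresses the volume itself, not the root directory.
        using (SafeFileHandle h = CreateFile(@"\\.\C:",
                   GENERIC_READ, FILE_SHARE_READ | FILE_SHARE_WRITE,
                   IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (h.IsInvalid)
                throw new Win32Exception(Marshal.GetLastWin32Error());

            using (var s = new FileStream(h, FileAccess.Read))
            {
                var sector = new byte[512];  // assuming 512-byte sectors
                s.Read(sector, 0, sector.Length);
                // Bytes 3..10 of a FAT/NTFS boot sector hold the OEM ID.
                Console.WriteLine("Boot sector OEM ID: " +
                    Encoding.ASCII.GetString(sector, 3, 8));
            }
        }
    }
}
```

Administrator rights are required, and all reads and writes through such a handle must be multiples of the sector size.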

Thanks Philips, this may give me a solution. – Sauron Jul 6 '09 at 9:20

There are Windows APIs for defragmentation which are completely safe, and they work on the NTFS file system. – legenden Aug 20 '09 at 23:23.
