It's all nicely described here, you just need to translate it. The interesting part starts at the sixth paragraph: whether fragmentation slows down reads or not depends on the type of flash drive.
In a recent series of articles about the real impact of fragmentation in today's storage and operating systems, I concluded that while defragmenting was still useful, it had diminishing returns if used to excess. For instance, defragmenting more than once a week yields only negligible benefits... unless you're deleting and adding a lot of files.
After reading the articles, someone emailed me to ask, "Do flash memory storage devices need to be defragmented?" At first I answered, "Probably not," but after some investigation, I came up with some justifications for defragging a flash memory device.
The big reason fragmentation has a harmful effect on hard disk drives is that it forces the drive to do more physical work to retrieve the same amount of data. The read/write heads have to move back and forth that much more, and the system sometimes has to wait for the platters to rotate into position, all of which incurs a cumulative performance penalty.
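To put rough numbers on that penalty, here is a back-of-envelope Python sketch. The 9 ms average seek time and 7200 RPM spindle speed are assumed typical figures for a consumer drive, not measurements from any particular device.

# Back-of-envelope estimate of the extra latency fragmentation adds on a
# hard disk: each fragment beyond the first costs roughly one seek plus
# half a rotation before the heads reach the next extent.

AVG_SEEK_MS = 9.0                               # assumed average seek time
ROTATIONAL_LATENCY_MS = 0.5 * (60_000 / 7200)   # half a rotation at 7200 RPM, ~4.2 ms

def extra_latency_ms(fragments: int) -> float:
    return max(fragments - 1, 0) * (AVG_SEEK_MS + ROTATIONAL_LATENCY_MS)

for n in (1, 10, 100):
    print(f"{n:4d} fragments -> ~{extra_latency_ms(n):7.1f} ms of extra latency")

At a hundred fragments the mechanical overhead alone exceeds a second, which is exactly the kind of delay a user notices.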
In short, the reason fragmentation causes perceptible performance problems is that drives have moving parts; they're not solid-state units, and they can't respond equally fast to every request for data.
On the other hand, flash memory devices have no moving parts. Retrieving any one byte of data takes essentially as long as retrieving any other; if there is a difference, it's not something that is cumulatively measurable or perceptible to the end user. If a file gets fragmented on a flash memory device, it takes no measurably greater amount of time to retrieve it than if it were contiguous.
However, some flash memory devices have very good sequential read performance but very poor random read performance. This is not consistent across all flash memory devices, and it's probably a reflection of the way some flash memory is engineered. The way this came to light was through discussion of the ReadyBoost feature in Windows Vista, which lets a user dedicate a flash memory device as a disk cache, provided the device is consistently fast.
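Whether a particular device shows this gap is easy to check. The Python sketch below times sequential versus random 4 KB reads; the file path is a placeholder for a large test file on the device, and since OS caching can distort the numbers, the file should be much bigger than RAM (or caches should be dropped between runs).

import os, random, time

PATH = "/media/flashdrive/testfile.bin"   # hypothetical test file on the device
BLOCK = 4096                              # 4 KB reads
COUNT = 2048                              # 8 MB touched in total

def sequential_read():
    # Read COUNT blocks front to back.
    with open(PATH, "rb", buffering=0) as f:
        for _ in range(COUNT):
            f.read(BLOCK)

def random_read():
    # Read COUNT blocks at random offsets.
    size = os.path.getsize(PATH)
    with open(PATH, "rb", buffering=0) as f:
        for _ in range(COUNT):
            f.seek(random.randrange(0, size - BLOCK))
            f.read(BLOCK)

for name, fn in (("sequential", sequential_read), ("random", random_read)):
    start = time.perf_counter()
    fn()
    elapsed = time.perf_counter() - start
    print(f"{name:10s}: {COUNT * BLOCK / elapsed / 1e6:.1f} MB/s")

If the two figures come out close, fragmentation is irrelevant for that device; a large gap points to the mixed-speed behavior described next.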
Some flash memory devices use one block of very fast flash memory, but the rest of the device is composed of slower memory. Vista will report how much of the memory on the device is suitable for ReadyBoost; if it says some of it is too slow, that's a sign you have a device with mixed memory speeds. If such a device were defragmented, it might mean that blocks of data were being moved from slower memory into faster memory, which would explain a speed-up. But again, not all flash memory devices are engineered like this, so it's not a guideline for how they all might behave, and not a reason to recommend defragmentation unilaterally.
Then there's the question of what "contiguous" even means on a flash memory device. Most flash memory devices also use wear-leveling strategies, which place an additional layer of abstraction between the data and how it's physically organized. This is done to keep any given block of memory from having its limited number of write cycles prematurely exhausted.
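The toy Python sketch below shows the kind of indirection involved. It is only an illustration; a real flash translation layer is far more sophisticated, and the shuffled allocation order here is just a stand-in for whatever wear-aware policy the controller actually uses.

import random

NUM_BLOCKS = 16

class ToyFTL:
    """Toy logical-to-physical block mapping, standing in for a
    flash translation layer with wear leveling."""
    def __init__(self):
        # Physical blocks are handed out in an order chosen to spread
        # wear, not to preserve contiguity.
        self.free = list(range(NUM_BLOCKS))
        random.shuffle(self.free)
        self.mapping = {}                  # logical block -> physical block

    def write(self, logical: int):
        self.mapping[logical] = self.free.pop()

ftl = ToyFTL()
for lb in range(8):                        # write a logically contiguous 8-block file
    ftl.write(lb)
print(ftl.mapping)                         # logically 0..7, physically scattered

A defragmenter only ever sees the logical side of that mapping, so "making a file contiguous" rearranges logical blocks that may land anywhere physically.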
This is why talking about a given file as "fragmented" on a flash memory drive is essentially meaningless; it could be stored by default in a number of entirely disparate blocks, and you'd never know. An argument could be made that the wear-leveling mechanisms in a flash memory drive could, over time, create a kind of fragmentation effect. But again, the total bottleneck such a thing would cause is probably too small to be measured or perceived.
According to an expert I talked to about this issue, another possible mechanism that might explain why a defragmented flash memory drive would run slightly faster than one that hasn't been defragmented is the total number of I/O operations required to retrieve a given set of data. A fragmented file requires more discrete I/O operations to fetch, so retrieving a number of fragmented files from such a device would probably accumulate a bit more I/O overhead than retrieving files that were contiguous. That said, without hard numbers to back this up, I have a hard time believing that the total I/O overhead in today's computers would create a cumulative delay big enough to notice.
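A rough Python sketch of that argument is below. The 64 KB maximum transfer size and the 100 microsecond fixed cost per request are assumed figures for command and driver overhead, not measured constants; the point is how slowly the overhead accumulates even with pessimistic inputs.

FILE_SIZE = 10 * 1024 * 1024      # 10 MB file
EXTENT = 64 * 1024                # assumed maximum transfer per request
PER_IO_OVERHEAD_US = 100          # assumed fixed cost per discrete I/O

def total_overhead_ms(fragments: int) -> float:
    # Each fragment needs at least one discrete request, and even a
    # contiguous file is split into EXTENT-sized transfers.
    requests = max(fragments, FILE_SIZE // EXTENT)
    return requests * PER_IO_OVERHEAD_US / 1000

for frags in (1, 50, 500):
    print(f"{frags:4d} fragments -> ~{total_overhead_ms(frags):6.1f} ms of request overhead")

Even at five hundred fragments the modeled overhead is a few tens of milliseconds on a 10 MB file, which is consistent with the skepticism above.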
Most of the data I have seen to support defragmenting flash memory has been anecdotal and not based on hard numbers: someone reported that a flash memory drive was slow, defragmented it, and then found it to be running much faster, without any useful information about what other factors might have changed. As before, if the drive is slowing down, that may be a hint that you have a flash memory drive that uses a mixture of fast and slow memory, and you may simply want to look into replacing it with a drive that isn't engineered that way.
In short, defragmenting flash memory is probably not worth it unless you can demonstrate that there is a perceptible speed improvement by doing so. The key word is perceptible, and unless you are using measurable and testable metrics for judging such a thing, you may not be witnessing anything other than subjective bias about how fast such things should be.
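If you do want to test it, the measurement has to be repeatable. The Python sketch below times the same workload several times and reports the mean and spread; the file path and the streaming workload are placeholders for whatever access pattern you actually care about. Run it before and after defragmenting: a difference smaller than the run-to-run spread is noise, not a perceptible improvement.

import statistics, time

def read_workload(path="/media/flashdrive/testfile.bin", block=4096):
    # Placeholder workload: stream the test file once from start to end.
    with open(path, "rb", buffering=0) as f:
        while f.read(block):
            pass

def benchmark(runs=5):
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        read_workload()
        times.append(time.perf_counter() - start)
    return statistics.mean(times), statistics.stdev(times)

mean, spread = benchmark()
print(f"mean {mean:.3f}s +/- {spread:.3f}s over 5 runs")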