What is the difference between ECC memory and normal memory? Explain what "ECC Support" on RAM means.

Questions

What memory limits are imposed by modern operating systems of the Windows family?

Obsolete, but still occasionally encountered, Windows 9x/ME operating systems can only work with 512 MB of memory. Although larger configurations are quite possible for them, they cause far more problems than benefits. Modern 32-bit Windows versions 2000/2003/XP and Vista theoretically support up to 4 GB of memory, but no more than 2 GB is actually available to applications. The exceptions are the entry-level editions Windows XP Starter Edition and Windows Vista Starter, which can work with no more than 256 MB and 1 GB of memory, respectively. The maximum supported memory size of 64-bit Windows Vista depends on the edition:

  • Home Basic - 8 GB;
  • Home Premium - 16 GB;
  • Ultimate - 128 GB or more;
  • Business - 128 GB or more;
  • Enterprise - 128 GB or more.

What is DDR SDRAM?

DDR (Double Data Rate) memory transfers data over the memory-chipset bus twice per clock cycle, on both edges of the clock signal. Thus, with the system bus and memory running at the same clock frequency, the throughput of the memory bus is twice that of conventional SDRAM.

Two parameters are commonly used to designate DDR memory modules: either the effective operating frequency (equal to twice the clock frequency) - for example, the clock frequency of DDR-400 memory is 200 MHz; or the peak throughput (in MB/s). The same DDR-400 has a bandwidth of approximately 3200 MB/s, so it can also be labeled PC3200. Today DDR memory has lost its relevance and has been almost completely superseded in new systems by the more modern DDR2; however, it is still produced to support the large number of older computers that have DDR memory installed. The most common 184-pin DDR modules are PC3200 and, to a lesser extent, PC2700. DDR SDRAM is also available in Registered and ECC variants.
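
A small worked example of how these designations relate to each other, assuming the standard 64-bit (8-byte) module data bus; the helper function name is mine, not part of any specification:

```python
# DDR-400 performs 400 million transfers per second (200 MHz clock, both edges)
# over a 64-bit bus, giving roughly 3200 MB/s, hence the PC3200 label.

def peak_bandwidth_mb_s(transfers_per_s_millions: int, bus_width_bits: int = 64) -> int:
    """Peak module bandwidth in MB/s for a given transfer rate in MT/s."""
    return transfers_per_s_millions * bus_width_bits // 8

print(peak_bandwidth_mb_s(400))   # 3200 -> DDR-400  is sold as PC3200
print(peak_bandwidth_mb_s(800))   # 6400 -> DDR2-800 is sold as PC2-6400
```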

What is DDR2 memory?

DDR2 memory is the successor to DDR and is currently the dominant memory type for desktops, servers, and workstations. DDR2 is designed to run at higher frequencies than DDR, consumes less power, and adds a set of new features (4-bit prefetch per clock, built-in termination). In addition, unlike DDR chips, which were produced in both TSOP and FBGA packages, DDR2 chips are only available in FBGA packages (which gives them greater stability at high frequencies). DDR and DDR2 memory modules are neither electrically nor mechanically compatible with each other: DDR2 uses 240-pin modules, while DDR uses 184-pin modules. Today the most common DDR2 memory runs at 333 MHz and 400 MHz and is designated DDR2-667 (PC2-5400/5300) and DDR2-800 (PC2-6400), respectively.

What is DDR3 memory?

The third generation of DDR memory, DDR3 SDRAM, should soon replace the current DDR2. The throughput of the new memory has doubled compared to its predecessor: each read or write operation now accesses eight groups of DDR3 DRAM data, which, using two different reference clocks, are multiplexed over the I/O pins at four times the clock frequency. Theoretically, effective DDR3 frequencies will range from 800 MHz to 1600 MHz (at clock frequencies of 400 MHz to 800 MHz), so DDR3 will be labeled, depending on speed, DDR3-800, DDR3-1066, DDR3-1333, and DDR3-1600. Among the main advantages of the new standard, the most notable is significantly lower power consumption (supply voltage: DDR3 - 1.5 V, DDR2 - 1.8 V, DDR - 2.5 V).

What is SLI-Ready Memory?

SLI-Ready memory, in other words memory with EPP (Enhanced Performance Profiles), was created by the marketing departments of NVIDIA and Corsair. EPP profiles are written to the module's SPD chip and contain, in addition to the standard memory timings, the optimal supply voltage for the module and a few extra parameters.

Thanks to EPP profiles, tuning the memory subsystem becomes easier, although the "additional" timings do not have a significant impact on system performance. So there is no significant gain from using SLI-Ready memory compared to conventional, manually tuned memory.

What is ECC memory?

ECC (Error Correcting Code) is used to correct random memory errors caused by various external factors and is an improved version of the "parity check" scheme. Physically, ECC is implemented as an additional memory chip installed next to the main ones, storing 8 extra check bits per 64-bit word. ECC modules are therefore 72 bits wide (as opposed to standard 64-bit modules). Some types of memory (Registered, Fully Buffered) are available only in ECC versions.
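
A back-of-the-envelope check (my own sketch, not vendor code) of why exactly 8 extra bits are needed: a Hamming code over 64 data bits requires 7 check bits, and one more overall parity bit upgrades it to SECDED (single-error correction, double-error detection).

```python
# A Hamming code needs r check bits such that 2**r >= data_bits + r + 1;
# one additional overall parity bit turns it into a SECDED code.

def hamming_check_bits(data_bits: int) -> int:
    r = 0
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r

data_bits = 64
r = hamming_check_bits(data_bits)   # 7 bits are enough to locate a single error
secded_bits = r + 1                 # +1 parity bit to also detect double errors
print(data_bits, r, secded_bits)    # 64 7 8 -> 64 data + 8 check = 72-bit module
```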

What is Registered Memory?

Registered memory modules are mainly used in servers that work with large amounts of RAM. All of them have ECC, i.e. they are 72 bits wide, and in addition they contain extra register chips for partial buffering of data (fully buffered modules are called Fully Buffered, or FB-DIMM), which reduces the load on the memory controller. Buffered DIMMs are generally incompatible with unbuffered ones.

Is it possible to use Registered memory instead of conventional memory and vice versa?

Despite the physically compatible connectors, regular unbuffered memory and Registered memory are not interchangeable; accordingly, using Registered memory instead of regular memory, or vice versa, will not work.

What is SPD?

Every DIMM memory module carries a small SPD (Serial Presence Detect) chip in which the manufacturer records the operating frequencies and corresponding delays of the memory chips needed for the module to work correctly. The SPD contents are read by the BIOS during the computer's power-on self-test, before the operating system boots, and allow memory access parameters to be configured automatically.
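
To make the idea concrete, here is a hedged sketch of what "reading the SPD" amounts to: the firmware fetches a small EEPROM over SMBus and interprets individual bytes. The byte-2 memory-type codes below follow the JEDEC SPD layout as I recall it; the function name and the fabricated dump are purely illustrative, not a complete or authoritative decoder.

```python
MEMORY_TYPES = {0x07: "DDR SDRAM", 0x08: "DDR2 SDRAM", 0x0B: "DDR3 SDRAM"}

def describe_spd(spd: bytes) -> str:
    mem_type = MEMORY_TYPES.get(spd[2], "unknown")   # byte 2: fundamental memory type
    return f"module type: {mem_type}, SPD bytes present: {len(spd)}"

# A fabricated 128-byte dump whose byte 2 marks it as DDR2.
fake_spd = bytes([0x00, 0x00, 0x08] + [0x00] * 125)
print(describe_spd(fake_spd))   # module type: DDR2 SDRAM, SPD bytes present: 128
```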

Can memory modules of different frequency ratings work together?

There are no fundamental restrictions on running memory modules with different frequency ratings together. In this case (with automatic memory configuration based on SPD data), the speed of the entire memory subsystem is determined by the speed of the slowest module.

Can a memory module rated for a higher frequency be used at a lower frequency?

Yes, it can. A module's high rated clock frequency does not prevent it from working at lower clock frequencies; moreover, thanks to the lower timings achievable at lower operating frequencies, memory latency decreases (sometimes significantly).

How many and what kind of memory modules should be installed in the system board in order for the memory to work in dual-channel mode?

In general, to get memory working in dual-channel mode, you need to install an even number of memory modules (2 or 4), and within each pair the modules must be of the same size and preferably (although not necessarily) from the same batch or, failing that, from the same manufacturer. On modern motherboards the memory slots of different channels are marked with different colors.

The sequence of installing memory modules in them, as well as all the nuances of the operation of this board with various memory modules, are usually detailed in the manual for the motherboard.

Which memory manufacturers should you pay attention to first?

There are several memory manufacturers with a well-deserved reputation on our market, for example OCZ, Kingston, Corsair, Patriot, Samsung, and Transcend.

Of course, this list is far from complete, but when buying memory from these manufacturers you can be reasonably confident of its quality.

As I understand it, his (Jeff Atwood's) arguments are as follows:

  1. Google didn't use ECC when they built their servers in 1999.
  2. Most RAM errors are systematic errors, not random ones.
  3. RAM errors are rare because hardware has improved.
  4. If ECC memory really mattered, it would be used everywhere, not just in servers; paying extra for such an optional feature is clearly dubious.
Let's go through these arguments one by one:

1. Google didn't use ECC in 1999

If you are doing something just because Google once did it, then try:

A. Place your servers in shipping containers.

Articles are still being written about what a great idea this is, although Google simply ran an experiment and judged it a failure. It turns out that even Google's experiments do not always succeed. In fact, their well-known fondness for ambitious "moonshot" projects means they have more failed experiments than most companies. In my opinion, this is a significant competitive advantage for them. Don't make that advantage bigger than it is by blindly copying failed experiments.

B. Start fires in your own data centers.

Part of Atwood's post discusses how amazing these servers were:

Some people may look at these early Google servers and see an amateurish fire hazard. Not me. I see a prescient understanding of how cheap, off-the-shelf hardware would shape the modern Internet.

The last part is true. But there is some truth in the first part as well. When Google started designing their own boards, one generation of them had a "growth" problem that caused a non-zero number of fires.

By the way, if you go to Jeff's post and look at the photo referenced in the quote, you will see that there are a lot of jumper cables on the boards. This caused problems and was fixed in the next generation of hardware. You can also see some rather sloppy cabling, which additionally caused problems and was also quickly fixed. There were other problems, but I'll leave them as an exercise for the reader.

C. Create Servers That Injure Your Employees

The sharp edges of one generation of Google servers earned them a reputation for being made of "razor blades and hate".

D. Create your own weather in your data centers

From talking to employees of many large technology companies, it sounds like most of them have had climate-control mishaps that caused clouds or fog to form in their data centers. You could call it Google's calculated and devious plan to replicate Seattle's weather in order to poach Microsoft employees. Alternatively, it might have been a plan to create "cloud computing" in the literal sense. Or maybe not.

Note that all of the above are things Google tried and then changed. Making mistakes and then fixing them is common in any successful engineering organization. If you are going to idolize someone's engineering practices, at least copy their current practices, not what they did in 1999.

When Google used non-ECC servers in 1999, they saw a number of symptoms that were eventually traced to memory corruption, including a search index that returned effectively random results for queries. The actual failure mode here is instructive. I often hear that ECC can be ignored on machines like these because errors in individual results are acceptable. But even if you consider random errors acceptable, ignoring them means risking complete data corruption, unless a careful analysis is done to make sure that a single error can only slightly distort a single result.

Studies of file systems have repeatedly shown that, despite heroic attempts to build systems that are resilient to a single error, it is extremely hard to do. Essentially every heavily tested file system can suffer a major failure from a single error. I am not attacking file system developers; they are better at this kind of analysis than 99.9% of programmers. The problem has simply been shown, again and again, to be hard enough that humans cannot reliably reason about it, and automated tooling for this kind of analysis is still far from a push-button affair. In their handbook on warehouse-scale computing, Google discusses error detection and correction, and ECC memory is treated as the obvious right choice when it is clear that hardware error correction should be used.

Google has excellent infrastructure. From what I have heard about the infrastructure of other major tech companies, Google's seems to be the best in the world. But that does not mean you should copy everything they do. Even considering only their good ideas, it makes no sense for most companies to copy them. They created a replacement for the standard Linux scheduler that uses both hardware run-time information and static traces to take advantage of new features in Intel server processors that allow dynamic partitioning of caches between cores. Used across their whole fleet, that saves Google more money in a week than Stack Exchange has spent on all of its machines in its entire history. Does that mean you should copy Google? No, unless you have already been showered with manna from heaven, for example in the form of core infrastructure written in highly optimized C++ rather than Java or (God forbid) Ruby. And the fact is that for the vast majority of companies, writing programs in a language that carries a 20-fold performance penalty is a perfectly reasonable decision.

2. Most RAM Errors Are Systematic Errors

The argument against ECC cites the following section of a DRAM error study (emphasis added by Jeff):
Our study has several main results. First, we found that approximately 70% of DRAM failures are recurring (e.g., permanent) failures, while only 30% are intermittent (transient) failures. Second, we found that large multi-bit failures, such as failures affecting an entire row, column, or bank, account for over 40% of all DRAM failures. Third, we found that almost 5% of DRAM failures affect board-level circuitry, such as data (DQ) or strobe (DQS) lines. Finally, we found that the Chipkill feature reduced the rate of system failures caused by DRAM failures by a factor of 36.

The quote seems somewhat ironic, as it does not seem to be an argument against ECC, but an argument for Chipkill - a certain ECC class. Putting that aside, Jeff's post indicates that systematic errors are twice as common as random errors. The post then says that they run memtest on their machines when systematic errors occur.

First, the 2:1 ratio is not large enough to simply ignore random errors. Second, the post implies that systematic errors are essentially fixed and cannot develop over time. This is not true. Electronics wears out just as mechanical devices do; the mechanisms are different, but the effects are similar. Indeed, if you compare chip reliability analysis with other types of reliability analysis, you can see that they often use the same families of distributions to model failures. Third, Jeff's line of reasoning implies that ECC cannot help detect or correct such errors, which is not only wrong but directly contradicted by the quote.

So, how often are you going to run memtest on your machines to try to catch these systematic errors, and how much data loss are you willing to tolerate? One of the key uses of ECC is not to correct errors but to signal them, so that hardware can be replaced before "silent corruption" occurs. Who is going to take a machine out of service every day to run memtest? That would be far more expensive than simply buying ECC memory. And even if you could convince someone to run memory tests, memtest would not catch as many errors as ECC can.

When I was working for a company with a fleet of about a thousand machines, we noticed that we were having strange data integrity check failures, and after about six months, we realized that failures on some machines were more likely than others. These failures were quite rare (maybe a couple of times a week on average), so it took a long time to accumulate information and understand what was happening. Without knowing the cause, parsing the logs to see if the errors were caused by single bit flips (with a high probability) was also non-trivial. We were fortunate that, as a side effect of the process we were using, the checksums were computed in a separate process on a different machine at different times, so that a bug could not corrupt the result and propagate this corruption to the checksum.

If you're just trying to protect yourself with in-memory checksums, there's a good chance that you'll perform a checksum operation on already corrupted data and get the correct checksum of the bad data, unless you're doing some really fancy calculations that give their own checksums. And if you're serious about error correction, then you're probably still using ECC.
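
A toy demonstration of that failure mode (an assumed Python sketch, not anyone's real pipeline): if a bit flips in memory before the checksum is computed, the checksum matches the corrupted data, and later verification happily passes.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

payload = bytearray(b"important payload")
payload[3] ^= 0x01                        # a bit flips in RAM before checksumming

stored_data = bytes(payload)              # the corrupted data gets written out...
stored_sum = checksum(stored_data)        # ...with a checksum of that corruption

# Verification sees a perfectly consistent pair and reports no problem.
print(checksum(stored_data) == stored_sum)   # True, even though the payload is wrong
```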

Anyway, after completing the analysis, we found that memtest could not detect any problems, but replacing the RAM on bad machines led to a decrease in the error rate by one to two orders of magnitude. Most services don't have the kind of checksums we had; these services will just silently write corrupted data to persistent storage and won't see the problem until the client complains.

3. RAM errors have become very rare thanks to improved hardware

The data in the post is not sufficient for such a claim. Note that as RAM capacities grow, and they continue to grow exponentially, RAM failure rates must fall at a faster exponential rate to actually reduce the frequency of data corruption. Also, as chips keep shrinking, their elements get smaller, which makes the wear issues discussed in the previous section more relevant. For example, at 20 nm a DRAM capacitor can hold somewhere around 50 electrons, and that number will be smaller for the next generation of DRAM and will keep shrinking.

Another note: when you pay for ECC, you are not just paying for ECC memory - you are paying for parts (processors, boards) that are of higher quality. This can easily be seen with drive failure rates, and I've heard a lot of people notice this in their personal observations.

To cite publicly available research: as far as I remember, Andrea and Remzi's group published a SIGMETRICS paper a few years ago showing that a SATA drive was 4 times more likely to fail a read than a SCSI drive and 10 times more likely to suffer silent data corruption. This ratio held even for drives from the same manufacturer. There is no particular reason to think that the SCSI interface should be more reliable than the SATA interface; this is not about the interface. It is about buying highly reliable server-grade components versus client-grade ones. Perhaps you do not care specifically about disk reliability because you checksum everything and corruption is easy to detect, but there are kinds of corruption that are harder to notice.

4. If ECC memory was really important, then it would be used everywhere, not just in servers.

To paraphrase this argument slightly, we can say that "if this characteristic were really important for servers, then it would be used in non-servers as well." You can apply this argument to quite a lot of server hardware. In fact, this is one of the most frustrating problems facing major cloud providers.

They have enough leverage to get most of the components at the right price. But bargaining will only work where there is more than one viable supplier.

One of the few areas with no viable competition is the manufacture of CPUs and GPUs. Fortunately for the large providers, they usually do not need GPUs; they need CPUs, and lots of them - this has been the case for a long time. There have been several attempts by CPU vendors to enter the server market, but each attempt had fatal flaws from the very start that made it obvious it was doomed (and these are often projects that take at least five years, i.e. a lot of time spent with no certainty of success).

Qualcomm's efforts have generated a lot of noise, but when I talk to my contacts at Qualcomm, they all tell me that the chip currently being produced is essentially a sample. It exists because Qualcomm needed to learn how to build a server chip with all the people it poached from IBM, and the next chip will be the first one that might actually be competitive. I have high hopes for Qualcomm, and also for ARM's efforts to make good server parts, but those efforts have not yet yielded the desired result.

The near-total unsuitability of current ARM (and POWER) options (setting aside hypothetical variants of Apple's impressive ARM chips) for most server workloads, in terms of performance per dollar of total cost of ownership (TCO), is a bit off topic, so I will leave it for another post. The point is that Intel's market position lets it make people pay extra for server features, and Intel does exactly that. Also, some features really are more important for servers than for mobile devices with a few gigabytes of RAM and a power budget of a few watts - devices that are expected to crash and reboot periodically anyway.

Conclusion

Should I buy ECC RAM? It depends on many things. For servers it's probably a good option considering costs. It's really hard to do a cost/benefit analysis though, because it's pretty hard to determine the cost of latent data corruption or the cost of risking losing half a year of a developer's time tracking down intermittent crashes, only to find they're caused by non-ECC memory usage.

For desktops, I'm also a supporter of ECC. But if you do not make regular backups, then it is more useful for you to invest in regular backups than in ECC memory. And if you have backups without ECC, then you can easily write corrupted data to the main storage and replicate this corrupted data to the backup.

Thanks to Prabhakar Ragda, Tom Murphy, Jay Weiskopf, Leah Hanson, Joe Wilder and Ralph Corderoy for discussion/comments/corrections. Also thanks (or perhaps no thanks) to Leah for convincing me to turn this off-the-cuff conversation into a blog post. Apologies for any errors, the lack of references, and the stilted prose; this is essentially a transcription of half of a conversation, and I have not explained terms, provided references, or checked facts to the level of detail I usually do.

One example that is funny (to me, at least) is the magical self-healing fuse. Although there are many implementations, you can think of an on-chip fuse as a kind of resistor. If you run some current through it, you get a connection. If the current is too high, the resistor heats up and eventually breaks. This is commonly used to disable features on chips or to set things like the clock speed, and the basic assumption is that once a fuse has been blown, there is no way to return it to its original state.

A long time ago there was a semiconductor manufacturer that was a bit hasty with one generation of its manufacturing process and cut the tolerances a little too fine. After a few months (or years), the connection between the two ends of a blown fuse could grow back and restore it. If you are lucky, that fuse is something like the most significant bit of the clock multiplier, and the change simply disables the chip. If you are unlucky, it leads to silent data corruption.

I heard about problems with this manufacturer's process generation from many people at different companies, so these were not isolated cases. When I say it is funny, I mean it is funny to hear the story in a bar. It is less funny to discover, after a year of testing, that some of your chips do not work because their fuse settings are nonsensical, and that you need to respin your chip and delay the release by three months. By the way, this fuse-regrowth situation is another example of a class of errors that can be mitigated with ECC.

This is not a Google problem; I mention it only because many people I talk to are surprised by the ways hardware can fail.

If you don't want to dig through the whole book, then here's the snippet:

In a system that can tolerate a number of failures at the software level, the minimum requirement for the hardware is that its faults are always detected and reported to software in a timely enough manner to allow the software infrastructure to contain them and take appropriate recovery actions. It is not necessary for the hardware to transparently handle all faults. This does not mean that hardware for such systems should be designed without error correction capabilities. Whenever error correction functionality can be offered at reasonable cost or complexity, supporting it often pays off. It does mean that if hardware error correction were extremely expensive, the system could use a cheaper version that provided detection capabilities only. Modern DRAM systems are a good example of a case in which powerful error correction can be provided at very low additional cost. Relaxing the requirement that hardware errors be detected, however, would be much harder, because it would mean that every software component would be burdened with the need to verify its own correct execution. Early in its history, Google had to deal with servers whose DRAM did not even have parity. Producing a web search index consists essentially of a very large sort/merge operation carried out over a long time on many machines. In 2000, one of Google's monthly web index updates failed pre-release checks when a subset of test queries was found to return seemingly random documents. After some investigation, a pattern was found in the new index files corresponding to a bit being stuck at zero at a consistent place in the data structures - a bad side effect of streaming a large amount of data through a faulty DRAM chip. Consistency checks were added to the index data structures to minimize the chance of this problem recurring, and no further problems of this nature were observed. Note, however, that this workaround did not guarantee 100% error detection in the indexing pass, since not all memory locations were checked - instructions, for example, were not. It worked because the index data structures were so much larger than all the other data involved in the computation that the presence of these self-checking data structures made it very likely that machines with defective DRAM would be identified and excluded from the cluster. The next generation of Google machines already included memory parity detection, and once the price of ECC memory dropped to competitive levels, all subsequent generations used ECC DRAM.

Tags: #ECC #Registered #Buffered #Parity #SPD

Error Correcting Code (ECC)

ECC, or Error Correcting Code (other expansions of the abbreviation exist), is an algorithm that replaced "parity checking". Unlike the latter, each bit is included in more than one checksum, which makes it possible, when a single bit is in error, to recover the address of the error and correct it. As a rule, two-bit errors are also detected, although they are not corrected. To implement these capabilities, an additional chip is installed on the module, making it 72 bits wide instead of the 64 data bits of a conventional module.

ECC is supported by all modern motherboards designed for servers, as well as by some "general-purpose" chipsets. Some memory types (Registered, Fully Buffered) are available only in ECC form. Note that ECC is not a cure for defective memory: it is meant to correct random errors, reducing the risk of computer malfunctions caused by accidental changes in memory cell contents due to external factors such as background radiation.
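
To illustrate the principle that each bit is included in more than one checksum, here is a toy Hamming(7,4) code in Python: the pattern of failed parity checks (the syndrome) points directly at the flipped bit, which can then be corrected. Real ECC modules use a wider SECDED code over 64-bit words; the layout and names here are mine, chosen for readability.

```python
def encode(d):
    # d = [d1, d2, d3, d4]; codeword layout (1-indexed): p1 p2 d1 p3 d2 d3 d4
    p1 = d[0] ^ d[1] ^ d[3]      # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]      # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]      # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # non-zero syndrome = 1-based position of the error
    if syndrome:
        c[syndrome - 1] ^= 1          # correct the single flipped bit in place
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

data = [1, 0, 1, 1]
word = encode(data)
word[4] ^= 1                          # simulate a random single-bit flip in memory
assert decode(word) == data           # the error is located and corrected
print("corrected:", data)
```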

Buffered

Buffered - a buffered module. Because of the high total electrical capacitance of modern memory modules, their long "charging" time makes write operations time-consuming. To avoid this, some modules (usually 168-pin DIMMs) are equipped with a special chip (a buffer) that quickly latches incoming data, freeing up the controller. Buffered DIMMs are generally incompatible with unbuffered ones. Partially buffered modules are also called "Registered", and fully buffered modules (Full Buffered) are called FB-DIMM. "Unbuffered" here refers to ordinary memory modules without any buffering.

Parity

Parity - parity checking; also used of modules with parity. A rather old principle of data integrity checking. The essence of the method is that at write time a checksum is calculated for each data byte and stored as a special parity bit in a separate chip. When the data is read, the checksum is calculated again and compared with the parity bit. If they match, the data is considered valid; otherwise a parity error message is generated (usually halting the system). The obvious disadvantages of the method are the cost of the extra memory needed to store the parity bits, the lack of protection against double errors (as well as false alarms when the error is in the parity bit itself), and the fact that the system stops even on a harmless error (say, in a video frame). It is no longer used today.
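
A small sketch of the parity scheme described above (illustrative Python, not module firmware): a single flipped bit changes the parity and is detected, while two flipped bits cancel each other out and slip through, which is exactly the weakness ECC was designed to address.

```python
def parity(byte: int) -> int:
    return bin(byte).count("1") % 2          # even-parity bit for an 8-bit value

stored = 0b10110010
stored_parity = parity(stored)               # computed when the byte is written

single_flip = stored ^ 0b00000100            # one bit changed by a random error
double_flip = stored ^ 0b00100100            # two bits changed

print(parity(single_flip) != stored_parity)  # True  -> error detected (not correctable)
print(parity(double_flip) != stored_parity)  # False -> double error goes undetected
```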

SPD chip

SPD is a chip on a DIMM memory module that contains all the data about it (in particular, information about speed) necessary to ensure normal operation. This data is read at the stage of computer self-testing, long before the operating system is loaded, and allows you to configure memory access settings even if there are different memory modules in the system at the same time. Some motherboards refuse to work with modules that do not have an SPD chip, but such modules are now very rare and are mainly PC-66 modules.

Explain what "ECC Support" on RAM means

  1. Memory checking for errors.
  2. It is an error-correction function. Such memory is used in servers, because they must not hang, shut down, or fail because of memory errors. For a home computer it is not essential, although it is useful. If you decide to install it, make sure your motherboard supports RAM with ECC.
  3. So can you just limit yourself to the memtest program? Or does this technology constantly monitor and correct small errors in the memory data?
  4. ECC (Error Correcting Code) - an algorithm for detecting and correcting errors that replaced "parity checking"; see the glossary above for the details on ECC, Buffered, Parity, and SPD. Two points worth adding:
    Registered memory modules are recommended for systems requiring (or supporting) 4 GB or more of RAM. They are always 72 bits wide, i.e. they are ECC modules, and contain additional register chips for partial buffering.
    PLL (Phase Locked Loop) - a circuit for automatic control of the clock frequency and phase; it reduces the electrical load on the memory controller and improves stability when a large number of memory chips is used, and is present in all buffered memory modules.
  5. A quick memtest check may not reveal errors, but memtest's Test 1 (Address test, own address), a deeper test that detects errors in memory addressing, catches such errors well. So if you are getting blue screens, it is usually either the RAM or the hard drive.
  6. They already said here, use windowsfix.ru




