From Sandy Bridge to Skylake: comparison of specific performance

Introduction

This summer, Intel did something unusual: it managed to replace two generations of processors aimed at mainstream personal computers. First, Haswell gave way to processors with the Broadwell microarchitecture, but within just a couple of months those lost their novelty status in turn and yielded to Skylake processors, which will remain Intel's most advanced CPUs for at least another year and a half. This generational leapfrog occurred mainly because of Intel's problems introducing its new 14 nm process technology, which is used to manufacture both Broadwell and Skylake. Desktop parts based on the Broadwell microarchitecture were heavily delayed on their way to market, while their successors arrived on a predetermined schedule, which led to a rushed announcement of the fifth-generation Core processors and a serious reduction in their life cycle. As a result of all these perturbations, in the desktop segment Broadwell has occupied a very narrow niche of power-efficient processors with a powerful graphics core and now sees only the modest sales typical of highly specialized products. The attention of advanced users has switched to Broadwell's successors, the Skylake processors.

It should be noted that over the past few years Intel has not exactly delighted its fans with performance gains. Each new generation of processors adds only a few percent of specific (per-clock) performance, which ultimately leaves users with no clear incentive to upgrade older systems. The release of Skylake, a generation of CPUs on the way to which Intel effectively skipped a step, inspired hopes that we would finally get a truly worthwhile update to the most common computing platform. Nothing of the sort happened: Intel performed in its usual repertoire. Broadwell was introduced to the public as an offshoot of the mainstream desktop processor line, while Skylake proved only marginally faster than Haswell in most applications.

Therefore, despite all the expectations, the arrival of Skylake on sale was met with considerable skepticism. After reviewing the results of real-world tests, many buyers simply did not see the point in switching to sixth-generation Core processors. Indeed, the main trump card of the fresh CPUs is first and foremost a new platform with faster internal interfaces, not a new processor microarchitecture. This means Skylake offers little real incentive to upgrade systems based on the previous generation.

Nevertheless, we would not dissuade every user from switching to Skylake. Even though Intel has been increasing the performance of its processors at a very restrained pace, four generations of microarchitecture have come and gone since the advent of Sandy Bridge, which is still at work in many systems. Each step along the path of progress contributed its share, and today Skylake can offer a fairly significant performance gain over its earlier predecessors. To see this, you just need to compare it not with Haswell but with the earlier representatives of the Core family that came before it.

That is exactly what we are going to do today. We decided to see how much the performance of Core i7 processors has grown since 2011 and gathered the flagship Core i7s of the Sandy Bridge, Ivy Bridge, Haswell, Broadwell and Skylake generations in a single test. With the results in hand, we will try to work out which processor owners should already be thinking about upgrading their old systems and which can wait for the next generations of CPUs. Along the way, we will also look at the performance of the new Core i7-5775C and Core i7-6700K processors of the Broadwell and Skylake generations, which have not yet been tested in our laboratory.

Comparative characteristics of tested CPUs

From Sandy Bridge to Skylake: Specific Performance Comparison

To recall how the specific performance of Intel processors has changed over the past five years, we decided to start with a simple test in which we compared the speed of Sandy Bridge, Ivy Bridge, Haswell, Broadwell and Skylake, all brought to the same 4.0 GHz clock frequency. For this comparison we used Core i7 processors, that is, quad-core processors with Hyper-Threading technology.

The comprehensive SYSmark 2014 1.5 test was chosen as the main tool; it is convenient because it reproduces typical user activity in common office applications, in creating and processing multimedia content, and in solving computational problems. The following graphs show the results obtained. For ease of perception they are normalized, with the performance of Sandy Bridge taken as 100 percent.



The overall SYSmark 2014 1.5 score allows the following observations. The transition from Sandy Bridge to Ivy Bridge increased specific performance only slightly, by about 3-4 percent. The next move, to Haswell, was far more rewarding, bringing a 12 percent improvement. That is the largest single step visible on the graph above: Broadwell outpaces Haswell by only 7 percent, and the transition from Broadwell to Skylake adds just another 1-2 percent. All the progress from Sandy Bridge to Skylake adds up to a 26 percent increase in performance at a constant clock speed.
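As a rough arithmetic check, the overall figure is simply the compounding of the individual generational steps. Below is a minimal Python sketch using the approximate percentages quoted above (illustrative round numbers, not exact measured data):

# Approximate same-clock gains per generation quoted above (illustrative values).
gains = [
    ("Sandy Bridge -> Ivy Bridge", 0.035),
    ("Ivy Bridge -> Haswell", 0.12),
    ("Haswell -> Broadwell", 0.07),
    ("Broadwell -> Skylake", 0.015),
]

total = 1.0
for step, gain in gains:
    total *= 1.0 + gain
    print(f"{step}: +{gain:.1%}, cumulative {total:.2f}x Sandy Bridge")

print(f"Overall same-clock gain: about {total - 1.0:.0%}")  # roughly +26%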

A more detailed interpretation of the obtained SYSmark 2014 1.5 indicators can be seen in the following three graphs, where the integral performance index is decomposed into components by application type.









Note that multimedia applications benefit most noticeably from the introduction of new microarchitectures: in them, Skylake outperforms Sandy Bridge by as much as 33 percent. Computational workloads, by contrast, show the least progress; under such a load, the step from Broadwell to Skylake even turns into a slight decrease in specific performance.

Now that we have an idea of what happened to the specific performance of Intel processors over the past few years, let's try to figure out what caused the observed changes.

From Sandy Bridge to Skylake: what has changed in Intel processors

It is no accident that we chose the Sandy Bridge generation as the reference point for comparing the various Core i7s. It was this design that laid a solid foundation for all further development of Intel's performance processors, right up to today's Skylake. Representatives of the Sandy Bridge family were the first highly integrated CPUs in which the compute and graphics cores were assembled on a single die together with the north bridge, the L3 cache and the memory controller. They were also the first to use an internal ring bus, which solved the problem of efficient interaction between all the structural units that make up such a complex processor. All subsequent generations of CPUs continue to follow these universal design principles laid down in the Sandy Bridge microarchitecture without any serious adjustments.

The internal microarchitecture of the compute cores also changed significantly in Sandy Bridge. It not only implemented support for the new AES-NI and AVX instruction sets, but also brought numerous major improvements deep in the execution pipeline. It was in Sandy Bridge that a separate zero-level cache for decoded instructions appeared; a completely new out-of-order engine based on a physical register file was introduced; branch prediction algorithms were significantly improved; and two of the three execution ports for working with data were unified. These wide-ranging reforms, carried out at all stages of the pipeline at once, seriously increased the specific performance of Sandy Bridge, which immediately rose by almost 15 percent compared to the previous generation of Nehalem processors. Add to that a 15 percent increase in nominal clock frequencies and excellent overclocking potential, and the result was a family of processors that Intel still holds up as an exemplary embodiment of the "tock" phase in the company's tick-tock development model.

Indeed, we have not seen improvements of comparable scale and effect in the microarchitectures that followed Sandy Bridge. All subsequent processor designs have made much smaller changes to the cores. Perhaps this reflects the lack of real competition in the processor market, perhaps the slowdown stems from Intel's desire to focus on improving its graphics cores, or maybe Sandy Bridge simply turned out to be such a successful project that developing it further requires too much effort.

The transition from Sandy Bridge to Ivy Bridge perfectly illustrates this decline in the intensity of innovation. Even though the generation that followed Sandy Bridge moved to a new 22 nm production technology, its clock speeds did not increase at all. The design changes mainly concerned a more flexible memory controller and a PCI Express bus controller that gained support for version 3.0 of the standard. As for the microarchitecture of the compute cores, a few cosmetic changes sped up division operations and slightly increased the efficiency of Hyper-Threading technology, and nothing more. As a result, the gain in specific performance amounted to no more than 5 percent.

At the same time, Ivy Bridge introduced something that the million-strong army of overclockers now bitterly regrets. Starting with this generation, Intel abandoned fluxless solder between the CPU die and the heat spreader covering it, and switched to filling the gap with a polymer thermal interface material of very dubious heat-conducting properties. This artificially worsened the frequency potential and made Ivy Bridge processors, as well as all their followers, noticeably less overclockable than the "old" Sandy Bridge chips, which are still very lively in this regard.

Ivy Bridge, however, was just a "tick", so no special breakthroughs were ever promised for those processors. Yet the next generation, Haswell, which unlike Ivy Bridge represents a "tock" phase, did not bring any inspiring performance growth either. This is actually a little strange, since the Haswell microarchitecture contains a great many improvements spread across different parts of the execution pipeline, which together could well have raised the overall rate of instruction execution.

For example, in the front end of the pipeline, branch prediction was improved and the decoded instruction queue began to be dynamically shared between the parallel threads coexisting under Hyper-Threading. At the same time, the out-of-order execution window grew, which should have increased the share of code the processor can execute in parallel. In the execution unit proper, two additional functional ports were added for processing integer instructions, servicing branches and storing data. Thanks to this, Haswell can process up to eight micro-ops per clock, a third more than its predecessors. Moreover, the new microarchitecture doubled the bandwidth of the first- and second-level caches.

Thus, about the only thing the Haswell improvements did not touch was the throughput of the decoder, which by now seems to have become the bottleneck of modern Core processors. Despite the impressive list of improvements, the gain in specific performance of Haswell over Ivy Bridge was only about 5-10 percent. To be fair, the speedup is noticeably larger for vector operations, and the greatest benefit is seen in applications using the new AVX2 and FMA instructions, support for which also appeared in this microarchitecture.

Haswell processors, like Ivy Bridge, were not particularly liked by enthusiasts at first, especially considering that in their original version they offered no increase in clock frequency. However, a year after their debut Haswell began to look noticeably more attractive. First, the number of applications able to exploit the strengths of this architecture and use vector instructions grew. Second, Intel managed to correct the frequency situation. Later versions of Haswell, which received their own code name Devil's Canyon, widened the gap over their predecessors by raising clock speeds, finally breaking through the 4 GHz ceiling. In addition, heeding the overclockers, Intel improved the polymer thermal interface under the processor's heat spreader, which made Devil's Canyon more amenable to overclocking. Not as malleable as Sandy Bridge, of course, but still.

With this baggage Intel approached Broadwell. Since the key feature of these processors was to be the new 14 nm production technology, no significant innovations in their microarchitecture were planned: it was supposed to be an almost textbook "tick". Everything needed for the success of the new products could well have been provided by the thinner process alone, with its second-generation FinFET transistors that in theory allow lower power consumption and higher frequencies. In practice, however, the new technology turned into a series of setbacks, as a result of which Broadwell gained only efficiency, not high frequencies. The processors of this generation that Intel introduced for desktop systems came out looking more like mobile CPUs than successors to Devil's Canyon. Moreover, in addition to trimmed thermal packages and rolled-back frequencies, they differ from their predecessors in a smaller L3 cache, which, however, is somewhat offset by the appearance of a fourth-level cache located on a separate chip.

At the same frequency as Haswell, Broadwell processors show a roughly 7 percent advantage, provided both by the additional data caching layer and by another improvement to the branch prediction algorithm along with enlarged internal buffers. In addition, Broadwell has new, faster execution schemes for multiply and divide instructions. However, all these small improvements are canceled out by the clock speed fiasco, which takes us back to the pre-Sandy Bridge era. For example, the flagship overclockable Core i7-5775C of the Broadwell generation trails the Core i7-4790K in frequency by as much as 700 MHz. Clearly, against this background it is pointless to expect any performance gain; avoiding a serious drop would already be an achievement.
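To see why the frequency deficit outweighs the per-clock gain, here is a back-of-the-envelope estimate in Python (a sketch using the rounded maximum turbo frequencies from the spec list and the approximate 7 percent IPC figure above; not a measurement):

# Rough estimate: relative performance ~ IPC ratio x clock ratio (illustrative only).
haswell_turbo_ghz = 4.4    # Core i7-4790K maximum turbo frequency
broadwell_turbo_ghz = 3.7  # Core i7-5775C maximum turbo frequency
ipc_ratio = 1.07           # ~7% higher per-clock performance for Broadwell

relative = ipc_ratio * (broadwell_turbo_ghz / haswell_turbo_ghz)
print(f"Estimated Core i7-5775C vs Core i7-4790K performance: {relative:.2f}x")
# ~0.90x: the per-clock advantage is more than wiped out by the 700 MHz clock deficit.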

In many ways, this is precisely why Broadwell turned out to be unattractive to the bulk of users. Yes, the processors of this family are highly economical and even fit into a 65 W thermal envelope, but who really cares? The overclocking potential of the first generation of 14 nm CPUs turned out to be quite modest: there is no question of operation at frequencies approaching the 5 GHz bar, and the maximum that can be squeezed out of Broadwell with air cooling lies in the vicinity of 4.2 GHz. In other words, Intel's fifth generation of Core came out strange, to say the least. The microprocessor giant eventually came to regret it: Intel representatives note that the late release of Broadwell for desktop computers, its shortened life cycle and atypical characteristics hurt sales, and the company does not plan to repeat such experiments.

Against this background, the newest Skylake comes across not so much as a further development of Intel's microarchitecture as a kind of work on past mistakes. Even though this generation of CPUs is produced on the same 14 nm process as Broadwell, Skylake has no problems with high frequencies. The nominal frequencies of the sixth-generation Core processors returned to the levels typical of their 22 nm predecessors, and the overclocking potential even grew slightly. Overclockers also benefited from the fact that in Skylake the processor's voltage regulator migrated back to the motherboard, reducing the CPU's total heat output during overclocking. The only pity is that Intel never returned to an effective thermal interface between the die and the heat spreader.

As for the basic microarchitecture of the compute cores, even though Skylake, like Haswell, embodies a "tock" phase, it contains very few innovations. Moreover, most of them are aimed at widening the front end of the execution pipeline, while the rest of the pipeline remained essentially unchanged. The changes amount to better branch prediction and a more efficient prefetcher, and nothing more. At the same time, some of the optimizations serve not so much to improve performance as to further increase energy efficiency. So it should come as no surprise that in specific performance Skylake is almost identical to Broadwell.

There are exceptions, however: in some cases Skylake can outperform its predecessors more noticeably. The reason is that the memory subsystem has been improved in this microarchitecture. The on-die ring bus became faster, which ultimately increased L3 cache bandwidth, and the memory controller gained support for DDR4 SDRAM operating at high frequencies.

In the end, though, whatever Intel says about the progressiveness of Skylake, from the point of view of ordinary users it is a rather weak update. The main improvements in Skylake concern the graphics core and energy efficiency, which opens the way for such CPUs into fanless tablet form-factor systems. Desktop representatives of this generation do not differ all that noticeably from Haswell. Even if we ignore the existence of the intermediate Broadwell generation and compare Skylake directly with Haswell, the observed increase in specific performance is about 7-8 percent, which can hardly be called an impressive manifestation of technical progress.

Along the way, it should be noted that the improvement of production processes has not lived up to expectations either. On the way from Sandy Bridge to Skylake, Intel changed semiconductor technology twice and more than halved the transistor gate dimensions. Yet the modern 14 nm process, compared with the 32 nm technology of five years ago, has not allowed processor operating frequencies to rise. All Core processors of the last five generations have very similar clock speeds, which exceed the 4 GHz mark only slightly, if at all.

For a visual illustration of this fact, you can look at the following graph, which displays the clock frequency of older overclocking Core i7 processors of different generations.



Moreover, the peak clock frequency does not even belong to Skylake. The Haswell processors of the Devil's Canyon subgroup boast the highest frequency: their nominal frequency is 4.0 GHz, and thanks to turbo mode they can accelerate to 4.4 GHz in real conditions. For the modern Skylake, the maximum frequency is only 4.2 GHz.

All this, of course, affects the final performance of real representatives of the various CPU families. Next, we propose to see how it plays out in the performance of platforms built on the flagship processors of the Sandy Bridge, Ivy Bridge, Haswell, Broadwell and Skylake families.

How We Tested

The comparison involved five Core i7 processors of different generations: Core i7-2700K, Core i7-3770K, Core i7-4790K, Core i7-5775C and Core i7-6700K. Therefore, the list of components involved in testing turned out to be quite extensive:

Processors:

Intel Core i7-2700K (Sandy Bridge, 4 cores + HT, 3.5-3.9 GHz, 8 MB L3);
Intel Core i7-3770K (Ivy Bridge, 4 cores + HT, 3.5-3.9 GHz, 8 MB L3);
Intel Core i7-4790K (Haswell Refresh, 4 cores + HT, 4.0-4.4 GHz, 8 MB L3);
Intel Core i7-5775C (Broadwell, 4 cores + HT, 3.3-3.7 GHz, 6 MB L3, 128 MB L4);
Intel Core i7-6700K (Skylake, 4 cores + HT, 4.0-4.2 GHz, 8 MB L3).

CPU cooler: Noctua NH-U14S.
Motherboards:

ASUS Z170 Pro Gaming (LGA 1151, Intel Z170);
ASUS Z97-Pro (LGA 1150, Intel Z97);
ASUS P8Z77-V Deluxe (LGA1155, Intel Z77).

Memory:

2x8 GB DDR3-2133 SDRAM, 9-11-11-31 (G.Skill F3-2133C9D-16GTX);
2x8 GB DDR4-2666 SDRAM, 15-15-15-35 (Corsair Vengeance LPX CMK16GX4M2A2666C16R).

Video card: NVIDIA GeForce GTX 980 Ti (6 GB/384-bit GDDR5, 1000-1076/7010 MHz).
Disk subsystem: Kingston HyperX Savage 480 GB (SHSS37A/480G).
Power supply: Corsair RM850i (80 Plus Gold, 850 W).

Testing was performed on the Microsoft Windows 10 Enterprise Build 10240 operating system using the following set of drivers:

Intel Chipset Driver 10.1.1.8;
Intel Management Engine Interface Driver 11.0.0.1157;
NVIDIA GeForce 358.50 Driver.

Performance

Overall Performance

To assess processor performance in common tasks, we traditionally use the Bapco SYSmark test suite, which simulates a user's work in real, widely used office programs and applications for creating and processing digital content. The idea of the test is very simple: it produces a single metric characterizing the weighted-average speed of the computer in everyday use. Following the release of Windows 10, this benchmark has been updated once again, and we now use the latest version, SYSmark 2014 1.5.
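Bapco does not publish a simple formula for the final score, so the snippet below is only a loose illustration of how a single composite index can be folded out of normalized scenario results (a geometric mean relative to a hypothetical reference system); it is not SYSmark's actual calibration:

import math

def composite_index(scores, reference):
    # Geometric mean of scenario scores normalized to a reference system,
    # scaled so that the reference system itself scores 100.
    ratios = [scores[name] / reference[name] for name in scores]
    return 100 * math.prod(ratios) ** (1 / len(ratios))

reference = {"office": 1000, "media": 1000, "data": 1000}  # hypothetical reference results
candidate = {"office": 1330, "media": 1280, "data": 1250}  # hypothetical measured results
print(f"Composite score: {composite_index(candidate, reference):.0f}")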



When Core i7 processors of different generations are compared in their nominal modes, the results are not at all the same as in the fixed-frequency comparison. The real clock speeds and the behavior of turbo mode have a significant impact on performance. For example, according to the data obtained, the Core i7-6700K is faster than the Core i7-5775C by as much as 11 percent, while its advantage over the Core i7-4790K is very small, only about 3 percent. At the same time, one cannot ignore the fact that the latest Skylake is significantly faster than the Sandy Bridge and Ivy Bridge generations: its advantage over the Core i7-2700K and Core i7-3770K reaches 33 and 28 percent, respectively.

A deeper understanding of the SYSmark 2014 1.5 results can be gained from the scores obtained in the individual usage scenarios. The Office Productivity scenario models typical office work: preparing documents, processing spreadsheets, working with email and browsing websites. The scenario uses the following set of applications: Adobe Acrobat XI Pro, Google Chrome 32, Microsoft Excel 2013, Microsoft OneNote 2013, Microsoft Outlook 2013, Microsoft PowerPoint 2013, Microsoft Word 2013, WinZip Pro 17.5.



The Media Creation scenario simulates the creation of a commercial using pre-captured digital images and video. The popular packages Adobe Photoshop CS6 Extended, Adobe Premiere Pro CS6 and Trimble SketchUp Pro 2013 are used for this purpose.



The Data/Financial Analysis scenario is dedicated to statistical analysis and investment forecasting based on a financial model. It operates on large volumes of numerical data and uses two applications: Microsoft Excel 2013 and WinZip Pro 17.5.



The results obtained under the various load scenarios qualitatively repeat the overall SYSmark 2014 1.5 scores. What stands out is that the Core i7-4790K does not look outdated at all. It noticeably loses to the latest Core i7-6700K only in the Data/Financial Analysis scenario; in the other cases it either trails its successor by a barely noticeable margin or even turns out to be faster. For example, the Haswell family member is ahead of the new Skylake in office applications. The processors of earlier years, the Core i7-2700K and Core i7-3770K, do look like somewhat dated offerings, though. They lose 25 to 40 percent to the newcomer in different types of tasks, and that is arguably reason enough to consider the Core i7-6700K a worthy replacement.

Gaming Performance

As is well known, in the vast majority of modern games the performance of platforms equipped with high-performance processors is determined by the power of the graphics subsystem. That is why, when testing processors, we choose the most CPU-intensive games and measure the frame rate in two passes. The first pass is run without anti-aliasing and at resolutions far below maximum. Such settings allow us to evaluate how well the processors cope with a gaming load in general, and thus to speculate about how the tested platforms will behave in the future, when faster graphics accelerators appear on the market. The second pass is performed with realistic settings: FullHD resolution and the maximum level of full-screen anti-aliasing. In our opinion, these results are no less interesting, as they answer the frequently asked question of what level of gaming performance processors can provide right now, under current conditions.

In this test, however, we assembled a powerful graphics subsystem based on NVIDIA's flagship GeForce GTX 980 Ti graphics card. As a result, in some games the frame rate showed a dependence on processor performance even at FullHD resolution.

Results in FullHD resolution with maximum quality settings


















Typically, the influence of the processor on gaming performance, especially where powerful Core i7 models are concerned, is negligible. However, when five Core i7s of different generations are compared, the results are far from uniform. Even at maximum quality settings, systems based on the Core i7-6700K and Core i7-5775C show the highest gaming performance, while the older Core i7s lag behind. Thus, the frame rate obtained in a system with a Core i7-6700K exceeds that of a Core i7-4790K system by an imperceptible one percent, but the Core i7-2700K and Core i7-3770K already look like a noticeably worse basis for a gaming system. Switching from a Core i7-2700K or Core i7-3770K to the latest Core i7-6700K yields a 5-7 percent increase in frame rate, which can have quite a noticeable impact on the quality of gameplay.

All this is seen much more clearly in the gaming performance of the processors at reduced image quality, when the frame rate is not limited by the power of the graphics subsystem.

Results at reduced resolution


















The latest Core i7-6700K again shows the highest performance among Core i7s of all recent generations. Its lead over the Core i7-5775C is about 5 percent, and over the Core i7-4790K about 10 percent. There is nothing strange in this: games are quite sensitive to the speed of the memory subsystem, and it is precisely here that Skylake brings serious improvements. The superiority of the Core i7-6700K over the Core i7-2700K and Core i7-3770K is far more noticeable. The older Sandy Bridge lags behind the newcomer by 30-35 percent, and Ivy Bridge loses to it by roughly 20-30 percent. In other words, however much Intel is scolded for improving its processors too slowly, the company has managed to increase the speed of its CPUs by a third over the past five years, and that is a very tangible result.

We round out the gaming tests with results from the popular synthetic benchmark Futuremark 3DMark.









The Futuremark 3DMark results echo what we saw in the games. When the Core i7 microarchitecture moved from Sandy Bridge to Ivy Bridge, 3DMark scores grew by 2 to 7 percent. The introduction of the Haswell design and the release of the Devil's Canyon processors added a further 7-14 percent to the flagship Core i7's performance. Then the arrival of the Core i7-5775C, with its relatively low clock speed, rolled performance back somewhat, and the latest Core i7-6700K effectively had to answer for two generations of microarchitecture at once. The increase in the overall 3DMark score for the new Skylake-family processor relative to the Core i7-4790K was up to 7 percent. That is not much: the Haswell processors delivered the most noticeable performance improvement of the past five years, and the latest generations of desktop processors are indeed somewhat disappointing.

Application Tests

In Autodesk 3ds Max 2016 we test final rendering speed by measuring the time it takes to render a single frame of the standard Hummer scene at 1920x1080 using the mental ray renderer.



Another final-rendering test is carried out with the popular free 3D graphics package Blender 2.75a, in which we measure how long it takes to build the final model from Blender Cycles Benchmark rev4.



To measure the speed of photorealistic 3D rendering we used the Cinebench R15 test. Maxon recently updated its benchmark, and it once again allows evaluating how fast various platforms render scenes in current versions of the Cinema 4D animation package.



The performance of websites and Internet applications built with modern technologies is measured in the new Microsoft Edge 20.10240.16384.0 browser. For this we use the specialized WebXPRT 2015 test, which implements in HTML5 and JavaScript the algorithms actually used in Internet applications.



Image-processing performance is tested in Adobe Photoshop CC 2015. We measure the average execution time of a test script, a creatively reworked Retouch Artists Photoshop Speed Test that includes typical processing of four 24-megapixel images taken with a digital camera.



In response to numerous requests from photography enthusiasts, we also ran a performance test in Adobe Photoshop Lightroom 6.1. The test scenario includes post-processing and export to JPEG at 1920x1080 resolution and maximum quality of two hundred 12-megapixel RAW images taken with a Nikon D300 digital camera.



In Adobe Premiere Pro CC 2015 we test non-linear video editing performance by measuring the rendering time to H.264 Blu-ray format of a project containing HDV 1080p25 footage with various effects applied.



To measure processor performance in data compression we use the WinRAR 5.3 archiver, with which we compress a folder of assorted files totaling 1.7 GB at the maximum compression ratio.



The x264 FHD Benchmark 1.0.1 (64-bit) test is used to estimate video transcoding speed to H.264; it measures the time the x264 encoder needs to encode a Full HD source video to MPEG-4/AVC format at default settings. It should be noted that the results of this benchmark are of great practical importance, since the x264 encoder underlies numerous popular transcoding utilities such as HandBrake, MeGUI, VirtualDub and so on. We periodically update the encoder used for the measurements; this testing used build r2538, which supports all modern instruction sets, including AVX2.



In addition, we added the new x265 encoder to the list of test applications. It is designed to transcode video into the promising H.265/HEVC format, a logical continuation of H.264 distinguished by more efficient compression algorithms. To evaluate performance, a Full HD Y4M source video file is transcoded to H.265 with the medium preset. Version 1.7 of the encoder was used in this testing.
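The benchmarks themselves are standalone harnesses, but the idea behind both tests is simply a timed transcode. As an illustration only (assuming an ffmpeg build with libx264 and libx265 support is installed; the file names are hypothetical and this is not the harness used in the article), a similar measurement could be scripted like this:

import subprocess
import time

def timed_transcode(codec, source="source.y4m", output="out.mkv"):
    # Transcode the source file with the given encoder and return the elapsed time.
    cmd = ["ffmpeg", "-y", "-i", source, "-c:v", codec, "-preset", "medium", output]
    start = time.time()
    subprocess.run(cmd, check=True, capture_output=True)
    return time.time() - start

for codec in ("libx264", "libx265"):
    print(f"{codec}: {timed_transcode(codec):.1f} s")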



The advantage of the Core i7-6700K over its early predecessors in various applications is beyond doubt, but two types of tasks have benefited most from the evolution that has taken place: first, processing of multimedia content, whether video or images; second, final rendering in 3D modeling and design packages. In such cases the Core i7-6700K outperforms the Core i7-2700K by at least 40-50 percent, and sometimes the improvement is far more impressive. For example, when transcoding video with the x265 codec, the latest Core i7-6700K delivers exactly twice the performance of the old Core i7-2700K.

As for the gains in resource-intensive tasks that the Core i7-6700K provides over the Core i7-4790K, there are no equally impressive illustrations of Intel's engineering work. The newcomer's greatest advantage is seen in Lightroom, where Skylake turned out to be half again as fast, but that is rather the exception to the rule. In most multimedia tasks the Core i7-6700K offers only about a 10 percent performance improvement over the Core i7-4790K, and under loads of a different nature the difference is even smaller or absent altogether.

A few words should be said separately about the results of the Core i7-5775C. Because of its low clock speed, this processor is slower than the Core i7-4790K and Core i7-6700K. But do not forget that its key characteristic is efficiency, and it may well be one of the best options in terms of performance per watt of electricity consumed. We will verify this in the next section.
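Performance per watt here means nothing more exotic than a benchmark score divided by the measured system power draw. A trivial Python sketch with hypothetical numbers (not our measured results):

# Hypothetical example values, not measurements from this article.
systems = {
    "Core i7-5775C": {"score": 1350, "watts": 80},
    "Core i7-6700K": {"score": 1500, "watts": 110},
}

for name, data in systems.items():
    print(f"{name}: {data['score'] / data['watts']:.1f} points per watt")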

Energy consumption

Skylake processors are manufactured on a modern 14 nm process with second-generation 3D transistors, yet despite this their TDP has risen to 91 W. In other words, the new CPUs are not only "hotter" than the 65 W Broadwells, but also exceed the calculated heat dissipation of the Haswells, which are produced on 22 nm technology and fit within an 88 W thermal package. The reason, evidently, is that the Skylake architecture was originally optimized not for high frequencies but for energy efficiency and use in mobile devices. For the desktop Skylake to reach acceptable clock frequencies in the vicinity of the 4 GHz mark, the supply voltage had to be raised, which inevitably affected power consumption and heat dissipation.

However, Broadwell processors were not notable for low operating voltages either, so there is hope that Skylake's 91 W thermal package is the result of formalities and that in practice the new chips will be no more power-hungry than their predecessors. Let's check!

The new Corsair RM850i digital power supply used in our test system lets us monitor consumed and delivered electrical power, which we use for the measurements. The following graph shows the total consumption of the systems (without a monitor) measured downstream of the power supply, i.e. the sum of the power draw of all the components involved. The efficiency of the power supply itself is not taken into account in this case. To assess energy consumption properly, we activated turbo mode and all available power-saving technologies.



At idle, a qualitative leap in the efficiency of desktop platforms came with the release of Broadwell: the Core i7-5775C and Core i7-6700K have noticeably lower idle consumption.



Under a video transcoding load, the most economical CPUs are the Core i7-5775C and Core i7-3770K. The latest Core i7-6700K consumes more; its energy appetite is on the level of the older Sandy Bridge. To be fair, unlike Sandy Bridge, the newcomer supports AVX2 instructions, which carry a considerable energy cost.

The following chart shows maximum consumption under the load created by the 64-bit version of the LinX 0.6.5 utility with AVX2 support, which is based on the Linpack package and has exorbitant energy appetites.



Once again the Broadwell-generation processor works wonders of energy efficiency. But a look at how much power the Core i7-6700K consumes makes it clear that progress in microarchitectures has largely bypassed the energy efficiency of desktop CPUs. Yes, with the release of Skylake the mobile segment gained new offerings with an extremely seductive ratio of performance to power consumption, but the latest desktop processors continue to consume about as much as their predecessors did five years ago.

Conclusions

Having tested the latest Core i7-6700K and compared it with several generations of previous CPUs, we again come to the disappointing conclusion that Intel continues to follow its unspoken principles and is in no hurry to increase the speed of desktop processors aimed at high-performance systems. While the new product offers roughly a 15 percent performance improvement over the flagship Broadwell thanks to noticeably higher clock frequencies, against the older but faster Haswell it no longer looks so progressive. The performance difference between the Core i7-6700K and the Core i7-4790K, even though two generations of microarchitecture separate these processors, does not exceed 5-10 percent. That is too little to unambiguously recommend the flagship desktop Skylake as an upgrade for existing LGA 1150 systems.

Then again, we should be used to such modest steps from Intel when it comes to raising the speed of desktop processors. Generational gains of roughly this size have long been the tradition; no revolutionary changes in the computing performance of Intel's desktop-oriented CPUs have happened for a very long time. The reasons are quite understandable: the company's engineers are busy optimizing the microarchitectures they develop for mobile applications and think first and foremost about energy efficiency. Intel's success in adapting its architectures for thin and light devices is undeniable, but adherents of classic desktops have to be content with small increases in performance, which, fortunately, have not disappeared entirely yet.

However, this does not mean that the Core i7-6700K can be recommended only for new systems. Owners of configurations based on the LGA 1155 platform with Sandy Bridge or Ivy Bridge generation processors may well consider upgrading their computers. Compared to the Core i7-2700K and Core i7-3770K, the new Core i7-6700K looks very good: its weighted-average advantage over those predecessors is around 30-40 percent. In addition, processors based on the Skylake microarchitecture support the AVX2 instruction set, which by now has found wide use in multimedia applications, and thanks to this the Core i7-6700K is much faster in some cases. When transcoding video, we even saw cases where the Core i7-6700K was more than twice as fast as the Core i7-2700K!

Skylake processors also have a number of other advantages associated with the new LGA 1151 platform that accompanies them. The point is not so much the DDR4 memory support it introduces as the fact that the new 100-series chipsets have finally received a genuinely high-speed link to the processor and support for a large number of PCI Express 3.0 lanes. As a result, advanced LGA 1151 systems boast numerous fast interfaces for connecting drives and external devices, free of any artificial bandwidth restrictions.

One more thing should be kept in mind when assessing the prospects of the LGA 1151 platform and Skylake processors: Intel is in no hurry to bring the next generation of processors, known as Kaby Lake, to market. According to available information, desktop versions of that series will appear only in 2017. So Skylake will be with us for a long time, and a system built on it will remain relevant for a very long period.

Splinting in periodontal diseases

Splinting is one of the methods of treating periodontal disease that reduces the likelihood of tooth loss (extraction).

The main indication for splinting in orthopedic practice is the presence of pathological tooth mobility. Splinting is also desirable to prevent recurrent inflammation of the periodontal tissues after treatment in cases of chronic periodontitis.

Splints can be removable or fixed.
Removable splints can be installed even when some teeth are missing, and they create good conditions for oral hygiene as well as for therapy and, if necessary, surgical treatment.

The virtues of fixed splints include the prevention of periodontal overload in any direction of loading, something removable dentures do not provide. The choice of splint type depends on many parameters, and without knowledge of the pathogenesis of the disease and of the biomechanical principles of splinting, the effectiveness of treatment will be minimal.

Indications for the use of splinting structures of any type include:

To analyze these parameters, X-ray data and other additional research methods are used. At the initial stage of periodontal disease, in the absence of pronounced tissue lesions (degeneration), splinting can be dispensed with.

The positive effects of splinting include the following points:

1. The splint reduces tooth mobility. Its rigidity keeps the teeth from loosening further, reducing the likelihood of an increasing amplitude of tooth movement and of eventual tooth loss. That is, the teeth can move only as far as the splint allows.
2. The effectiveness of the splint depends on the number of teeth it includes: the more teeth, the greater the effect of splinting.
3. Splinting redistributes the load on the teeth. The main chewing load falls on healthy teeth, while loose teeth are loaded less, which further aids healing. The more healthy teeth included in the splint, the more pronounced the unloading of the mobile ones; if most of the teeth in the mouth are mobile, the splint's effectiveness is reduced.
4. Splinting of the anterior teeth (incisors and canines) gives the best results, and the best splints are those that unite the largest number of teeth. Ideally, therefore, the splint should cover the entire dental arch. The explanation is simple: in terms of stability, an arched structure is better than a linear one.
5. Because a linear structure is less stable, mobile molars are splinted symmetrically on both sides and united by a bridge connecting the two nearly linear rows. This design significantly increases the splinting effect. Other splinting options are considered depending on the characteristics of the disease.

Permanent splints are not installed for every patient. The clinical picture of the disease, the state of oral hygiene, the presence of dental deposits, bleeding gums, the depth of periodontal pockets, the severity of tooth mobility, the nature of tooth displacement, and so on are all taken into account.

The absolute indication for permanent splinting structures is pronounced tooth mobility with atrophy of the alveolar process of no more than one quarter of the length of the tooth root. With more pronounced changes, the inflammatory changes in the oral cavity are treated first.

The choice of splint type depends on the severity of atrophy of the alveolar processes of the jaw, the degree of tooth mobility, the position of the teeth, and so on. With pronounced mobility and atrophy of the bone processes up to one third of their height, fixed prostheses are recommended; in more severe cases, both removable and fixed prostheses may be used.

When deciding whether splinting is needed, sanitation of the oral cavity is of great importance: dental treatment, treatment of inflammatory changes, removal of tartar, and even extraction of certain teeth when strictly indicated. All this maximizes the chances of successful splinting treatment.

Fixed splints in orthopedic dentistry

Splints in orthopedic dentistry are used to treat periodontal diseases in which pathological tooth mobility is present. The effectiveness of splinting, like any other treatment in medicine, depends on the stage of the disease and therefore on how early treatment begins. Splints reduce the load on the teeth, which reduces periodontal inflammation and improves healing and the patient's overall well-being.

Splints must have the following properties:

Fixed splints include the following types:

Ring splint.
This is a set of soldered metal rings which, when placed over the teeth, hold them firmly in place. The design may vary in technique and materials of manufacture, and the quality of treatment depends on the accuracy of the fit. Fabrication therefore goes through several stages: taking an impression, making a plaster model, making the splint, and determining how much the dentition must be prepared for reliable fixation of the splint.

Half-ring splint.
The half-ring splint differs from the ring splint in the absence of a full ring on the outer side of the dentition. This allows greater aesthetics while following a fabrication technique similar to that of the ring splint.

Cap splint.
This is a series of caps soldered together and placed over the teeth, covering their cutting edges and inner (lingual) surfaces. The caps can be cast or made from individual stamped crowns that are then soldered together. The method works especially well when full crowns are present, to which the entire structure is attached.

Inlay splint.
The method resembles the previous one, except that the cap-inlay has a projection that fits into a recess at the top of the tooth, which strengthens its fixation and that of the whole splint. As in the previous case, the splint is attached to full crowns to give the structure maximum stability.

Crown and half-crown splint.
A full-crown splint is used when the gums are in good condition, because the risk of injuring them with a crown is high. Metal-ceramic crowns are usually used, as they give the best aesthetic result. If the alveolar processes of the jaw are atrophied, equatorial crowns are placed; they stop just short of the gums and allow treatment of the periodontal pocket. A half-crown splint is a one-piece casting or half-crowns soldered together (crowns covering only the inner side of the tooth). Such crowns give the maximum aesthetic effect, but the splint demands virtuoso skill, since preparing and attaching it is quite difficult. To reduce the likelihood of a half-crown detaching from the tooth, pins are recommended, which in effect "nail" the crown to the tooth.

Interdental splint.
In the modern version of this method, two adjacent teeth are joined by special implanted inserts that mutually reinforce them. Various materials can be used, but recently preference has been given to photopolymers, glass ionomer cement and composite materials.

Splints of Treiman, Weigel, Struntz, Mamlok, Kogan, Brun and others. Some of these eponymous splints have lost their relevance, others have been modernized.

Splint-prostheses (fixed prosthesis splints) are a special type of splint. They solve two problems at once: treating periodontal disease and replacing missing teeth. The splint has a bridge-like structure in which the main chewing load falls not on the prosthetic tooth replacing the missing one but on the supporting areas of the adjacent teeth. Thus there are quite a few options for splinting with fixed structures, which allows the doctor to choose a technique according to the characteristics of the disease, the condition of the particular patient and many other parameters.

Removable splints in orthopedic dentistry

Splinting with removable structures can be used both when the dentition is intact and when some teeth are missing. Removable splints usually cannot restrict tooth mobility in all directions, but their positive aspects include the absence of any need to grind or otherwise prepare the teeth, and the creation of good conditions for oral hygiene as well as for treatment.

When the dentition is intact, the following splint types are used:

Elbrecht splint.
The framework alloy is elastic yet strong enough. It protects against mobility of the dentition in all directions except the vertical, i.e. it provides no protection under chewing load. That is why such a splint is used in the initial stages of periodontal disease, when a moderate chewing load does not cause the disease to progress, and also when tooth mobility is of the first degree (minimal). The splint can be positioned high (near the crown of the tooth), in the middle, or low (basally), and it can also be made wide. The type of attachment and the width of the splint depend on the specific situation and are therefore chosen by the doctor individually for each patient. The design can also be modified to accommodate artificial teeth.

Elbrecht splint with T-shaped clasps in the region of the anterior teeth.

This design allows additional fixation of the dental arch. However, it is suitable only with minimal tooth mobility and in the absence of pronounced periodontal inflammation, since with pronounced inflammatory changes it can further traumatize the periodontium.

Removable splint with a molded mouth guard.
This is a modification of the Elbrecht splint that reduces the mobility of the incisors and canines in the vertical (chewing) direction. Protection is provided by special caps in the region of the front teeth that reduce the chewing load on them.

Circular splint.
It can be standard or have claw-like extensions. It is used when tooth mobility is not pronounced, because a significant deviation of the teeth from their axis makes it difficult to put the prosthesis on or take it off. With a significant deviation of the teeth from their axis, sectional (collapsible) designs are recommended.
Removable dentures can also be used when some teeth are missing.

Given that tooth loss can itself provoke periodontal disease, two problems have to be solved at once: replacing the lost tooth and using splinting to prevent periodontal disease. Each patient's disease has its own features, so the design of the splint is strictly individual. Quite often prosthetics with temporary splinting are used to prevent the development of periodontal disease or other pathology. In any case, measures must be planned that will give the maximum therapeutic effect for the particular patient. The choice of splint design thus depends on the number of missing teeth, the degree of deformation of the dentition, the presence and severity of periodontal disease, age, bite pathology and type, oral hygiene and many other parameters.

In general, when several teeth are missing and periodontal pathology is severe, preference is given to removable dentures. The design of the prosthesis is selected strictly individually and requires several visits to the doctor. A removable design requires careful planning and a specific sequence of steps:

Diagnosis and examination of the periodontium.
Preparation of the tooth surfaces and taking impressions for the future model.
Study of the model and planning of the splint design.
Wax modeling of the splint.
Casting the framework and checking its accuracy on the plaster model.
Checking the splint (splint-prosthesis) in the oral cavity.
Finishing (polishing) of the splint.

Not all the steps are listed here, but even this list shows how complex the manufacture of a removable splint (splint-prosthesis) is. That complexity explains why several sessions with the patient are needed and why so much time passes from the first visit to the last. But the result of all the effort is always the same: restoration of anatomy and physiology, leading to restored health and social rehabilitation.

source: www.DentalMechanic.ru

Interesting articles:

A remedy for menstrual problems may relieve baldness

id="0">According to German scientists, the plant, which was used by the American Indians to normalize the menstrual cycle, can get rid of ... baldness.

Ruhr University researchers say black cohosh is the first known herbal ingredient that can stop hormonal hair loss and even promote hair growth and thickness.

An estrogen-like substance from the plant has been used by the Indians for generations and is still sold in the United States as a homeopathic remedy for rheumatism, back pain and menstrual irregularities.

Black cohosh grows in eastern North America and reaches three meters in height.

According to the researchers, a new gentle testing system was used to evaluate the drug's effect. The test animals were guinea pigs; by now they are probably distinguished by increased shagginess.

Neurosurgical treatment of neurological complications of herniated lumbar discs

id="1">

K.B. Yrysov, M.M. Mamytov, K.E. Estemesov.
Kyrgyz State Medical Academy, Bishkek, Kyrgyz Republic.

Introduction.

Discogenic sciatica and other compression complications of herniated lumbar discs occupy a leading position among diseases of the peripheral nervous system. They account for 71-80% of these diseases and 11-20% of all diseases of the central nervous system. This shows that lumbar disc pathology is widespread in the population, affecting mainly young, able-bodied people (20-55 years old) and leading to temporary and/or permanent disability.

Certain forms of discogenic lumbosacral radiculitis often have an atypical course, and recognizing them causes considerable difficulty. This applies, for example, to radicular lesions in herniated lumbar discs. More serious complications may arise if the root is accompanied and compressed by an additional radiculomedullary artery. Such an artery takes part in the blood supply of the spinal cord, and its occlusion can cause an infarction extending over several segments. In this case true conus, epiconus or combined conus-epiconus syndromes develop.
It cannot be said that the treatment of herniated lumbar discs and their complications receives little attention. In recent years numerous studies have been carried out with the participation of orthopedists, neurologists, neurosurgeons, radiologists and other specialists. Facts of paramount importance have been obtained, forcing a re-evaluation and rethinking of a number of aspects of this problem.

Nevertheless, opposing views persist on many theoretical and practical issues; in particular, questions of pathogenesis, diagnosis and the choice of the most appropriate treatment methods require further study.

The aim of this work was to improve the results of neurosurgical treatment and achieve a stable recovery of patients with neurological complications of herniated lumbar intervertebral discs by improving topical diagnostics and surgical methods of treatment.

Material and methods.

From 1995 to 2000 we examined and operated on 114 patients with neurological complications of herniated lumbar intervertebral discs using a posterior neurosurgical approach. Among them were 64 men and 50 women. All patients were operated on using microneurosurgical techniques and instruments. Patient age ranged from 20 to 60 years, with patients aged 25-50, mostly male, predominating. The main group consisted of 61 patients who, in addition to severe pain, had acute or gradually developing motor and sensory disorders as well as gross dysfunction of the pelvic organs; they were operated on using extended approaches such as hemilaminectomy and laminectomy. The control group consisted of 53 patients operated on via an interlaminar approach.

Results.

The clinical features of neurological complications of herniated lumbar intervertebral discs were studied and the characteristic clinical symptoms of lesions of the spinal roots were identified. 39 patients were characterized by a special form of discogenic radiculitis with a peculiar clinical picture, where paralysis of the muscles of the lower extremities came to the fore (in 27 cases - bilateral, in 12 - unilateral). The process was not limited to the cauda equina, and spinal symptoms were also detected.
In 37 patients, damage to the conus of the spinal cord was noted, where the characteristic clinical symptoms were loss of sensitivity in the perineal region, anogenital paresthesia, and dysfunction of the pelvic organs of the peripheral type.

The clinical picture in 38 patients was characterized by myelogenous intermittent claudication, against the background of which paresis of the feet developed; fascicular twitching of the muscles of the lower extremities was noted, and there were pronounced dysfunctions of the pelvic organs - urinary and fecal incontinence.
The level and nature of damage to the spinal roots caused by disc herniation were diagnosed using a complex that included a thorough neurological examination, plain radiography (102 patients), contrast studies (30 patients), computed tomography (45 patients) and magnetic resonance imaging (27 patients).

When choosing indications for surgery, we were guided by the clinical picture of the neurological complications of lumbar disc herniation identified during a thorough neurological examination. The absolute indication was cauda equina compression syndrome caused by prolapse of a disc fragment in the median position; in these patients dysfunction of the pelvic organs predominated. The second indisputable indication was the presence of movement disorders with developing paresis or paralysis of the lower extremities. The third indication was severe pain syndrome not amenable to conservative treatment.

Neurosurgical treatment of the neurological complications of herniated lumbar intervertebral discs consisted in eliminating the pathologically altered structures of the spine that directly caused compression or reflex vascular-trophic pathology of the cauda equina roots and of the vessels that run within the root and take part in the blood supply to the lower segments of the spinal cord. The pathologically altered anatomical structures of the spine included elements of a degenerated intervertebral disc; osteophytes; hypertrophy of the yellow ligament, arches and articular processes; varicose veins of the epidural space; pronounced cicatricial adhesive epiduritis, etc.
The choice of approach was based on the basic requirements for surgical intervention: minimal trauma, maximum visibility of the object of intervention, and the lowest possible likelihood of intra- and postoperative complications. Based on these requirements, in the neurosurgical treatment of neurological complications of herniated lumbar intervertebral discs we used extended posterior approaches: hemilaminectomy (partial or complete) and laminectomy of one vertebra.

In our study, in 61 of the 114 operations for neurological complications of herniated lumbar intervertebral discs we deliberately chose extended operations. Preference was given to hemilaminectomy (52 patients) and laminectomy of one vertebra (9 patients) over the interlaminar approach, which was used in 53 cases and served as the control group for comparative evaluation of the results of surgical treatment (Table 1).

In all cases of surgical interventions, we had to separate cicatricial adhesive epidural adhesions. This circumstance is of particular importance in neurosurgical practice, given that the surgical wound is characterized by considerable depth and relative narrowness, and the neurovascular elements of the spinal motion segment, which are extremely important in terms of functional significance, are involved in the cicatricial adhesive process.

Table 1. The volume of surgical intervention depending on the localization of the disc herniation.

Localization of disc herniation | ILE | GLE | LE | Total
Posterolateral                  |     |     |    |
Paramedian                      |     |     |    |
Median                          |     |     |    |
Total                           | 53  | 52  | 9  | 114

Abbreviations: ILE - interlaminectomy, GLE - hemilaminectomy, LE - laminectomy.

The immediate results of neurosurgical treatment were assessed according to the following scheme:
- Good: no pain in the lower back and legs, complete or almost complete recovery of movement and sensation, good tone and strength of the muscles of the lower extremities, restoration of impaired pelvic organ function, working capacity fully preserved.

- Satisfactory: significant regression of the pain syndrome, incomplete recovery of movement and sensation, good muscle tone in the legs, significant improvement in pelvic organ function, working capacity almost preserved or somewhat reduced.

- Unsatisfactory: incomplete regression of the pain syndrome, persistent motor and sensory disorders, reduced tone and strength of the muscles of the lower extremities, pelvic organ function not restored, working capacity reduced or lost (disability).

In the main group (61 patients), the following results were obtained: good - in 45 patients (72%), satisfactory - in 11 (20%), unsatisfactory - in 5 patients (8%). In these last 5 patients, the operation had been performed between 6 months and 3 years after the onset of the complications.

In the control group (53 patients), the immediate results were: good - in 5 patients (9.6%), satisfactory - in 19 (34.6%), unsatisfactory - in 29 (55.8%). These data made it possible to consider the interlaminar approach in case of neurological complications of herniated lumbar intervertebral discs as ineffective.

In analyzing the results of our study, none of the serious complications described in the literature (damage to vessels and abdominal organs, air embolism, necrosis of the vertebral bodies, discitis, etc.) were observed. These complications were prevented by the use of optical magnification and microsurgical instruments, accurate preoperative determination of the level and nature of the lesion, adequate anesthetic support, and early activation of patients after surgery.

Our observations show that early surgical intervention in patients with neurological complications of lumbar disc herniation gives a more favorable prognosis.
Thus, the use of a complex of methods of topical diagnostics and microneurosurgical techniques in combination with advanced surgical approaches effectively contributes to the restoration of the working capacity of patients, shortening their stay in the hospital, and improving the results of surgical treatment of patients with neurological complications of herniated lumbar intervertebral discs.


Mercury in fish is not that dangerous

id="2">The mercury that forms in fish meat is actually not as dangerous as previously thought. Scientists have found that the mercury molecules in fish are not that toxic to humans.

"We have reason to be optimistic about our research," said Graham George, head of research at the Stanford University Radiation Laboratory in California. "The mercury in fish may not be as toxic as many people think, but we still have a lot to learn." before we can make a final decision."

Mercury is a potent neurotoxin. If it enters the body in large quantities, a person may lose sensation, suffer convulsions and develop hearing and vision problems; in addition, the probability of a heart attack rises. Mercury in its pure form does not normally enter the human body. As a rule, it arrives with the meat of animals that ate mercury-contaminated plants or drank water containing mercury compounds.

The meat of predatory marine fish such as tuna, swordfish, shark, tilefish, king mackerel, marlin and red snapper, as well as all fish that live in polluted waters, most often contains high levels of mercury. Mercury is a heavy metal that accumulates at the bottom of the bodies of water where such fish live. Because of this, doctors in the US recommend that pregnant women limit their consumption of these fish.

The consequences of consuming fish high in mercury are not yet fully clear. However, studies of the population around a mercury-contaminated Finnish lake indicate a predisposition of local inhabitants to cardiovascular diseases. In addition, even lower concentrations of mercury are expected to cause certain disturbances.

Recent studies in the UK on mercury concentrations in toenail tissues and DHA content in fat cells have proven that fish consumption is the main source of mercury ingestion in humans.

The study by experts from Stanford University shows that in the body of fish, mercury binds to different substances than it does in humans. The researchers say they hope their work will help create drugs that remove toxins from the body.

Height, weight and ovarian cancer

id="3">A study of 1 million Norwegian women, published in the Journal of the National Cancer Institute on August 20, suggests that high height and a high body mass index during puberty are risk factors for cancer ovaries.

Height has previously been shown to be directly related to the risk of developing malignant tumors, but its association with ovarian cancer has not received much attention. In addition, the results of previous studies have been conflicting, especially regarding the relationship between body mass index and the risk of developing ovarian cancer.

To shed some light on this, a team of researchers from the Norwegian Institute of Public Health, Oslo, analyzed data from approximately 1.1 million women who were followed for an average of 25 years. By roughly age 40, 7882 of the subjects had been diagnosed with ovarian cancer.

As it turned out, body mass index in adolescence was a reliable predictor of the risk of developing ovarian cancer. Women whose adolescent body mass index was at or above the 85th percentile were 56 percent more likely to develop ovarian cancer than women whose index fell between the 25th and 74th percentiles. Notably, no significant association was found between the risk of ovarian cancer and body mass index in adulthood.

Researchers say that in women younger than 60 years, height, like weight, is also a reliable predictor of the risk of developing this pathology, especially endometrioid ovarian cancer. For example, women who are 175 cm or over are 29 percent more likely to develop ovarian cancer than women who are 160 to 164 cm tall.

Dear girls and women: staying slender and feminine is not only beautiful, but also good for your health!

Fitness and pregnancy

id="4">So, you are used to leading an active lifestyle, regularly attending a sports club... But one fine day you will find out that you will soon become a mother. Naturally, the first thought is that you will have to change your habits and, apparently, give up fitness. But doctors believe that this opinion is erroneous. Pregnancy is not a reason to stop exercising.

It must be said that more and more women have recently come to share this point of view. After all, performing certain exercises during pregnancy, selected by an instructor, has no negative impact on the growth and development of the fetus and does not alter the physiological course of pregnancy and childbirth.
On the contrary, regular fitness classes increase the physical capabilities of the female body, strengthen psycho-emotional stability, improve the functioning of the cardiovascular, respiratory and nervous systems and have a positive effect on metabolism, so that the mother and her unborn baby receive enough oxygen.
Before starting, you need to assess the woman's capacity to adapt to physical activity and take her sporting background into account (whether she has exercised before, her "training history", and so on). Of course, a woman who has never played any sport should exercise only under the supervision of a doctor (this may be a fitness physician at the club).
The training program for the expectant mother should include both general developmental exercises and special ones aimed at strengthening the muscles of the spine (especially the lumbar region), as well as certain breathing exercises (breathing skills) and relaxation exercises.
The training program for each trimester is different, taking into account the state of health of the woman.
By the way, many exercises are aimed at reducing the perception of pain during childbirth. You can do them both at special courses for expectant mothers and in many fitness clubs that offer similar programs. Regular walking also reduces discomfort and eases the process of childbirth. In addition, as a result of training, the elasticity and tone of the abdominal wall increase, the risk of visceroptosis decreases, congestion in the pelvic area and lower extremities decreases, and the flexibility of the spine and the mobility of the joints increase.
And studies conducted by Norwegian, Danish, American and Russian scientists have shown that sports activities have a positive effect not only on the woman herself, but also on the development and growth of the unborn baby.

Where to begin?
Before starting to exercise, a woman must undergo a medical examination to identify any contraindications to physical activity and to determine her level of fitness. Contraindications to classes can be general and special.
General contraindications:
acute illness
exacerbation of a chronic disease
decompensation of the functions of any body system
generally severe or moderately severe condition

Special contraindications:
toxicosis
habitual miscarriage
high number of abortions
all cases of uterine bleeding
risk of miscarriage
multiple pregnancy
polyhydramnios
entanglement of the umbilical cord
congenital malformations of the fetus
features of the placenta

Next, you should decide what exactly you want to do, whether group training suits you or not. In general, classes can be very different:
special, individual lessons conducted under the supervision of an instructor
group classes in a variety of fitness areas
relaxing water activities
The most important thing when compiling a training program is the relationship between exercises and gestational age, an analysis of the state of health and processes in each trimester, and the body's reaction to the load.

Features of trimester training
First trimester (up to 16 weeks)
During this period, the formation and differentiation of tissues takes place, and the connection of the fertilized egg with the mother's body is still very weak (so any strong load can provoke a miscarriage).
The balance of the autonomic nervous system is also disturbed at this time, which often leads to nausea, constipation and flatulence; metabolism is restructured toward storage, and the oxygen demand of the body's tissues increases.
Training in this period should activate the cardiovascular and broncho-pulmonary systems, normalize the function of the nervous system and raise the overall psycho-emotional tone.
During this period, the following are excluded from the complex of exercises:
straight leg raises
lifting both legs at the same time
abrupt transition from a lying position to a sitting position
sharp torso bends
sharp bending of the body

Second trimester (from 16 to 32 weeks)
During this period, the third circle of blood circulation (mother - fetus) is formed.
Blood pressure may be unstable (with a tendency to rise), the placenta joins in the metabolism (the estrogens and progesterone it produces stimulate the growth of the uterus and mammary glands), and posture changes (increased lumbar lordosis, pelvic tilt angle and load on the back extensors). The foot flattens and venous pressure rises, which can often lead to swelling and dilation of the veins in the legs.
Classes during this period should form and consolidate the skills of deep and rhythmic breathing. It is also useful to do exercises to reduce venous congestion and strengthen the arch of the foot.
In the second trimester, exercises in the supine position are most often excluded.

Third trimester (from 32 weeks to delivery)
During this period, the uterus enlarges, the load on the heart increases, changes occur in the lungs, venous outflow from the legs and small pelvis worsens, the load on the spine and arch of the foot increases.
Classes during this period are aimed at improving blood circulation in all organs and systems, at reducing various kinds of congestion, and at stimulating the work of the intestines.
When compiling a program for the third trimester, there is always a slight decrease in the overall load, as well as a decrease in the load on the legs and the amplitude of the movements of the legs.
During this period, forward bends of the torso are excluded, and the standing starting position can be used in only 15-20% of the exercises.

15 tips for exercising during pregnancy
REGULARITY - it is better to train 3-4 times a week (1.5-2 hours after breakfast).
The POOL is a great place for a safe and rewarding workout.
PULSE CONTROL - on average up to 135 beats / min (at 20 years old it can be up to 145 beats / min).
BREATHING CONTROL - use the "speaking test": you should be able to talk calmly while exercising.
BASAL TEMPERATURE - no more than 38 degrees.
INTENSIVE LOAD - no more than 15 minutes (intensity is very individual and depends on training experience).
ACTIVITY - training should not start or end abruptly.
COORDINATION - exclude exercises that demand high coordination or quick changes of direction, as well as jumps, pushes, balance exercises and movements with maximum flexion and extension in the joints.
BODY POSITION - the transition from horizontal to vertical and back should be slow.
BREATHING - we exclude exercises with straining and holding the breath.
CLOTHING - light, open.
WATER - be sure to maintain a proper drinking regimen.
WORKOUT ROOM - well ventilated and with a temperature of 22-24 degrees.
FLOOR (COVERING OF THE HALL) - must be stable and not slippery.
AIR - daily walks are required.

Holland holds the world championship in liberalism

id="5">This week, Holland will become the first country in the world where hashish and marijuana will be sold in pharmacies by prescription, Reuters reported on August 31.

This humane gesture by the government will help ease the suffering of patients with cancer, AIDS, multiple sclerosis and various neuralgias. According to experts, more than 7,000 people have been buying these soft drugs precisely for pain relief.

Hashish was used as a pain reliever for over 5,000 years until it was replaced by stronger synthetic drugs. However, physicians' views on its medicinal properties diverge: some consider it a natural and therefore relatively harmless drug, while others claim that hashish increases the risk of depression and schizophrenia. But both sides agree on one thing: it will bring nothing but relief to terminally ill people.

Holland is generally famous for its liberal views: it was also the first country in the world to allow same-sex marriage and euthanasia.

Is the heart a perpetual motion machine?

id="6">Scientists from the Proceedings of the National Academy of Sciences claim that stem cells can become a source of myocardiocyte formation in human heart hypertrophy.

Previously, it was traditionally believed that an increase in heart mass in adulthood is possible only due to an increase in the size of myocardiocytes, but not due to an increase in their number. However, more recently, this truth has been shaken. Scientists have found that in particularly difficult situations, myocardiocytes can multiply by division or regenerate. But still, it is not yet clear how exactly the regeneration of heart tissue occurs.

A team of scientists from New York Medical College in Valhalla studied heart muscle taken from 36 patients with aortic valve stenosis during heart surgery. Heart muscle taken from 12 people who had died, within the first 24 hours after death, served as the control.

The authors note that the increase in heart mass in patients with aortic valve stenosis is due both to an increase in the mass of each myocardiocyte and to an increase in their overall number. Delving into the specifics of the process, the scientists found that the new myocardiocytes are formed from stem cells destined to become them.

It was found that the content of stem cells in the heart tissue of patients with aortic valve stenosis is 13 times higher than in the control group. Moreover, the state of hypertrophy enhances the growth and differentiation of these cells. The scientists say, "The most significant finding from this study is that cardiac tissue contains primitive cells that are commonly misidentified as hematopoietic cells due to their similar genetic structure." The regenerative capacity of the heart attributable to stem cells in aortic valve stenosis is approximately 15 percent. Similar figures are observed when a heart from a female donor is transplanted into a male recipient: after some time, approximately 15 percent of the heart cells have a male genotype, a phenomenon known as cell chimerization.

Experts hope that the data from these studies and the results of previous work on chimerism will arouse even greater interest in the field of heart regeneration.

August 18, 2003, Proc Natl Acad Sci USA.


1. Microarchitecture of Sandy Bridge: briefly

The Sandy Bridge chip is a 64-bit processor with two or four cores, out-of-order execution, support for two threads per core (Hyper-Threading) and the ability to execute four instructions per clock; it has an integrated graphics core and an integrated DDR3 memory controller, a new ring bus, and support for 3- and 4-operand 128/256-bit AVX (Advanced Vector Extensions) vector instructions; it is manufactured on Intel's 32 nm process.

That, in one sentence, describes the new second-generation Intel Core processors for mobile and desktop systems, shipping since 2011.
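To make the 3-operand point concrete, here is a minimal C sketch (an illustration, not Intel sample code) of an AVX addition: the destination register is distinct from both sources, whereas the legacy 2-operand SSE form overwrites one of its inputs. It assumes a compiler with AVX support (e.g. gcc -mavx).

```c
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    /* Two 256-bit vectors of eight single-precision floats each. */
    __m256 a = _mm256_set1_ps(1.5f);
    __m256 b = _mm256_set1_ps(2.5f);

    /* 3-operand AVX form: c = a + b, neither source is destroyed.  */
    /* The legacy SSE equivalent (addps) is 2-operand: a = a + b.   */
    __m256 c = _mm256_add_ps(a, b);

    float out[8];
    _mm256_storeu_ps(out, c);
    printf("first lane: %.1f\n", out[0]);  /* prints 4.0 */
    return 0;
}
```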

Second-generation Intel Core processors based on the Sandy Bridge microarchitecture come in a new 1155-pin LGA1155 package and require new motherboards based on Intel 6 Series chipsets (Intel B65 Express, H61 Express, H67 Express, P67 Express, Q65 Express, Q67 Express and Z68 Express), as well as the later Z77.


Roughly the same microarchitecture is used in the Intel Sandy Bridge-E server and enthusiast parts, with differences in the form of a larger number of processor cores (up to 8), the LGA2011 processor socket, more L3 cache, more DDR3 memory channels, and PCI Express 3.0 support.

The previous generation, the Westmere microarchitecture, was a two-die design: a 32 nm processor die and an additional 45 nm "coprocessor" with the graphics core and memory controller on board, placed on a single package substrate and exchanging data over the QPI bus; in other words, an integrated hybrid chip.

When creating the Sandy Bridge microarchitecture, the developers placed all of these elements on a single 32 nm die, abandoning the classic bus layout in favor of a new ring bus.

The essence of the Sandy Bridge architecture has remained the same: the bet is on raising the overall performance of the processor by improving the "individual" efficiency of each core.



The structure of the Sandy Bridge chip can be divided into the following essential elements: processor cores, graphics core, L3 cache and System Agent. Let us describe the purpose and implementation features of each of these elements.

The whole history of Intel's processor microarchitecture upgrades in recent years is tied to the progressive integration into a single die of more and more modules and functions that previously sat outside the processor: in the chipset, on the motherboard, and so on. As processor performance and the degree of chip integration increased, the bandwidth requirements of the internal intercomponent buses grew at an even faster pace. Previously, intercomponent buses with a cross topology sufficed.

However, such a topology is efficient only with a small number of components participating in the data exchange. In Sandy Bridge, to improve overall system performance, the developers turned to a ring topology for the 256-bit interconnect bus, based on a new version of QPI (QuickPath Interconnect).

The bus is used for data exchange between the chip's components:

● four x86 processor cores,
● the graphics core,
● the L3 cache, and
● the system agent.


The bus consists of four 32-byte rings: the data ring (Data Ring), the request ring (Request Ring), the snoop ring (Snoop Ring) and the acknowledge ring (Acknowledge Ring).


The rings are controlled by a distributed-arbitration communication protocol, and requests are pipelined at the clock frequency of the processor cores, which gives the microarchitecture extra flexibility when overclocking. Bus performance is rated at 96 GB/s per connection at a 3 GHz clock, roughly four times higher than in the previous generation of Intel processors.
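The 96 GB/s figure follows directly from the 32-byte width of the data ring running at the core clock; a small sanity-check calculation (our own arithmetic, not an Intel formula):

```c
#include <stdio.h>

int main(void) {
    const double ring_width_bytes = 32.0;   /* data ring width per transfer   */
    const double clock_hz = 3.0e9;          /* ring runs at the core clock, 3 GHz */

    /* One 32-byte transfer per clock per connection:              */
    /* 32 B * 3e9 1/s = 96e9 B/s = 96 GB/s (decimal gigabytes).    */
    double bytes_per_second = ring_width_bytes * clock_hz;
    printf("per-connection bandwidth: %.0f GB/s\n", bytes_per_second / 1e9);
    return 0;
}
```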

The ring topology and bus organization ensure low latency when processing requests, maximum performance and excellent scalability of the technology for chip versions with different numbers of cores and other components.

In the future, the ring bus can be "connected" up to 20 processor cores per die, and such a redesign can be done very quickly, in the form of a flexible and responsive response to current market needs.

In addition, the ring bus is physically located directly above the L3 cache blocks, in the top metallization layer, which simplifies the layout of the design and allows the chip to be made more compact.

The term network topology refers to the way computers are connected in a network. You may also hear other names: network structure or network configuration (they mean the same thing). In addition, the concept of topology covers many rules that determine the placement of computers, cable-laying methods, the placement of interconnecting equipment, and much more. By now, several basic topologies have taken shape; among them are the "bus", the "ring" and the "star".

Bus topology

The bus topology (often called a common bus or backbone) assumes the use of a single cable to which all workstations are connected. The common cable is used by all stations in turn. All messages sent by individual workstations are received and heard by all other computers connected to the network, and from this stream each workstation picks out only the messages addressed to it.

Advantages of bus topology:

  • ease of setup;
  • relative ease of installation and low cost if all workstations are located nearby;
  • the failure of one or more workstations does not affect the operation of the entire network.

Disadvantages of bus topology:

  • bus failures anywhere (cable break, network connector failure) lead to network inoperability;
  • difficulty in troubleshooting;
  • low performance - at any given time, only one computer can transmit data to the network, with an increase in the number of workstations, network performance drops;
  • poor scalability - to add new workstations, it is necessary to replace sections of the existing bus.

It was according to the "bus" topology that local networks were built on coaxial cable: segments of coaxial cable connected by T-connectors acted as the bus. The bus was laid through all the rooms and ran up to each computer, and the side output of the T-connector was plugged into the connector on the network card. Such networks are now hopelessly outdated and have been replaced everywhere by the twisted-pair "star"; however, coaxial equipment can still be seen in some enterprises.
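As a rough illustration of this "everyone hears everything, each keeps only its own" behaviour, here is a toy C sketch; the types and function names are made up for the example and do not correspond to any real networking API.

```c
#include <stdio.h>

#define STATIONS 4

/* Toy model of a shared bus: a frame put on the bus is seen by every
 * station; each station keeps only frames addressed to it.          */
struct frame { int src; int dst; const char *payload; };

static void bus_broadcast(struct frame f) {
    for (int station = 0; station < STATIONS; station++) {
        if (station == f.src) continue;           /* sender skips itself */
        if (station == f.dst)                     /* address filter      */
            printf("station %d accepts \"%s\" from %d\n",
                   station, f.payload, f.src);
        /* all other stations see the frame but silently drop it */
    }
}

int main(void) {
    bus_broadcast((struct frame){ .src = 0, .dst = 2, .payload = "hello" });
    return 0;
}
```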

Topology "ring"

Ring is a local network topology in which workstations are connected in series to one another, forming a closed ring. Data is transmitted from one workstation to another in one direction (around the circle). Each PC acts as a repeater, relaying messages to the next PC; data is passed from one computer to the next as if in a relay race. If a computer receives data intended for another computer, it passes them further along the ring; otherwise they are not forwarded.

Advantages of ring topology:

  • ease of installation;
  • almost complete absence of additional equipment;
  • the possibility of stable operation without a significant drop in the data transfer rate during intensive network loading.

However, the “ring” also has significant drawbacks:

  • each workstation must actively participate in the transfer of information; in the event of failure of at least one of them or a cable break, the operation of the entire network stops;
  • connecting a new workstation requires a short network shutdown, since the ring must be open during the installation of a new PC;
  • complexity of configuration and customization;
  • difficulty in troubleshooting.

The ring topology is rarely used today. Its main application has been in fiber-optic networks and in networks of the Token Ring standard.
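The relay behaviour of the ring, and the way a single failed node breaks it, can be sketched in the same toy style (again with made-up names):

```c
#include <stdio.h>
#include <stdbool.h>

#define NODES 5

/* Toy model of a ring: frames travel in one direction, and each node
 * relays them until the destination (or a failed node) is reached.   */
static bool node_alive[NODES] = { true, true, true, true, true };

static void ring_send(int src, int dst) {
    int hop = src;
    for (;;) {
        hop = (hop + 1) % NODES;                  /* pass to the next node */
        if (!node_alive[hop]) {
            printf("node %d is down: the whole ring is broken\n", hop);
            return;
        }
        if (hop == dst) {
            printf("node %d accepts the frame from %d\n", dst, src);
            return;
        }
        printf("node %d relays the frame onward\n", hop);
    }
}

int main(void) {
    ring_send(0, 3);        /* delivered via nodes 1 and 2          */
    node_alive[2] = false;
    ring_send(0, 3);        /* now fails: node 2 breaks the ring    */
    return 0;
}
```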

Star topology

Star is a local network topology in which each workstation is connected to a central device (a switch or router). The central device controls the movement of packets in the network. Each computer is connected to the switch through its network card by a separate cable. If necessary, several star networks can be combined, producing a network configuration with a tree topology. The tree topology is common in large companies; we will not consider it in detail in this article.

The star topology has become the main one in the construction of local networks. This happened due to its many advantages:

  • the failure of one workstation or damage to its cable does not affect the operation of the entire network as a whole;
  • excellent scalability: to connect a new workstation, it is enough to lay a separate cable from the switch;
  • easy troubleshooting of faults and network breaks;
  • high performance;
  • ease of setup and administration;
  • additional equipment is easily integrated into the network.

However, like any topology, the “star” is not without its drawbacks:

  • the failure of the central switch will result in the inoperability of the entire network;
  • additional costs for network hardware, namely the device to which all the network's computers are connected (the switch);
  • the number of workstations is limited by the number of ports in the central switch.

The star is the most common topology for wired and wireless networks. A typical example of a star topology is a twisted-pair cable network with a switch as the central device. Such networks are found in most organizations.
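For comparison, a matching toy sketch of the star: the switch forwards a frame only to the destination port, so a dead station affects nobody else (names are again invented for the example):

```c
#include <stdio.h>
#include <stdbool.h>

#define PORTS 4

/* Toy model of a star: every station hangs off its own switch port;
 * the switch forwards a frame only to the destination port.         */
static bool port_up[PORTS] = { true, true, false, true };  /* port 2 is dead */

static void switch_forward(int src_port, int dst_port, const char *payload) {
    if (!port_up[dst_port]) {
        printf("port %d is down, frame dropped; other ports are unaffected\n",
               dst_port);
        return;
    }
    printf("switch forwards \"%s\" from port %d to port %d only\n",
           payload, src_port, dst_port);
}

int main(void) {
    switch_forward(0, 3, "hello");   /* delivered                        */
    switch_forward(0, 2, "hello");   /* dropped: only station 2 affected */
    return 0;
}
```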

The capabilities of the Sandy Bridge GPU are generally comparable to those of Intel's previous-generation integrated solutions, except that, in addition to DirectX 10, support for DirectX 10.1 has now been added instead of the expected DirectX 11. Accordingly, the not-so-numerous applications that use OpenGL are limited to hardware compatibility with only version 3 of that free API.

Nevertheless, there are a lot of innovations in Sandy Bridge graphics, and they are mainly aimed at increasing performance when working with 3D graphics.

According to Intel representatives, the main emphasis in developing the new graphics core was placed on making maximum use of dedicated hardware for computing 3D functions, and likewise for processing media data. This approach differs radically from the fully programmable hardware model adopted, for example, by NVIDIA, or by Intel itself in developing Larrabee (with the exception of the texture units).

In the Sandy Bridge implementation, however, this departure from programmable flexibility has undeniable advantages that matter more for integrated graphics: lower latency when executing operations, better performance combined with lower energy consumption, a simplified driver programming model and, importantly, a smaller physical size of the graphics module.

Sandy Bridge's programmable shader execution units, which Intel traditionally calls Execution Units (EUs), feature larger register files, which makes efficient execution of complex shaders possible. The new execution units also employ branch optimization to achieve better parallelization of the executed instructions.

Overall, according to Intel representatives, the new execution units have twice the throughput of the previous generation of integrated graphics, and the performance of transcendental calculations (trigonometry, natural logarithms, and so on) should increase by a factor of 4 to 20 thanks to the emphasis on dedicated hardware.

The internal instruction set, extended in Sandy Bridge with a number of new instructions, allows most DirectX 10 API instructions to be mapped one-to-one, much as in a CISC architecture, which results in significantly higher performance at the same clock speed.

Fast access over the ring bus to a distributed L3 cache with dynamically configurable segmentation reduces latency, increases performance and at the same time reduces how often the GPU has to access RAM.

Ring bus

The entire history of Intel processor microarchitecture upgrades in recent years is inextricably linked with the progressive integration into a single chip of more and more modules and functions that were previously located outside the processor: in the chipset, on the motherboard, and so on. Accordingly, as processor performance and the degree of chip integration increased, the bandwidth requirements for the internal interconnect buses grew at an even faster pace. For a time, even after a graphics chip was introduced into the Arrandale/Clarkdale architecture, it was possible to manage with intercomponent buses of the usual cross topology - that was enough.

However, the efficiency of such a topology is high only with a small number of components participating in the data exchange. In the Sandy Bridge microarchitecture, to improve the overall performance of the system, the developers decided to turn to a ring topology for the 256-bit interconnect bus (Fig. 6.1), based on a new version of QPI (QuickPath Interconnect) technology, expanded, refined and first implemented in the architecture of the Nehalem-EX server chip (Xeon 7500), and also planned for use with the Larrabee chip architecture.

The ring bus (Ring Interconnect) in the desktop and mobile version of the Sandy Bridge architecture is used to exchange data between six key components of the chip: four x86 processor cores, the graphics core, the L3 cache (now called the LLC, Last Level Cache) and the system agent. The bus consists of four 32-byte rings: the data ring (Data Ring), the request ring (Request Ring), the snoop ring (Snoop Ring) and the acknowledge ring (Acknowledge Ring); in practice, this allows access to the 64-byte last-level-cache interface to be split into two separate packets. The buses are controlled by a distributed-arbitration communication protocol, and requests are pipelined at the clock frequency of the processor cores, which gives the architecture additional flexibility during overclocking. Ring bus performance is rated at 96 GB per second per connection at 3 GHz, effectively four times faster than in previous-generation Intel processors.

Fig.6.1. Ring bus (Ring Interconnect)

The ring topology and bus organization ensures minimum latency in processing requests, maximum performance and excellent technology scalability for chip versions with different numbers of cores and other components. According to company representatives, in the future, up to 20 processor cores per chip can be “connected” to the ring bus, and such a redesign, as you understand, can be done very quickly, in the form of a flexible and prompt response to current market needs. In addition, the ring bus is physically located directly above the L3 cache blocks in the upper metallization layer, which simplifies the design layout and allows the chip to be made more compact.


