A short reflection on the state of modern mainstream music

Sound has come a long way since the days of using the internal speaker of a PC for nothing more than beeps and boops. CD-quality audio is commonplace, yet convenience is frequently prioritised ahead of quality. MP3 files are popular because of their small size and their suitability for storage-limited devices. The downside to MP3 is that it's a lossy compression format, meaning that the audio is stripped of data to save space. In theory the data that is discarded is inaudible to humans, but in practice even the best compression methods cause some level of quality degradation.

An alternative to MP3 is FLAC. FLAC, or Free Lossless Audio Codec, is exactly that: a lossless compression codec. Vaguely similar to how zip files work, FLAC compresses audio without permanently removing data, which means the file must be decompressed during playback. Unfortunately, despite significant growth in storage capacities and Internet speeds, MP3 remains the dominant format for digital music.
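The practical difference is easy to demonstrate. Below is a minimal Python sketch that uses zlib as a stand-in for a lossless codec (FLAC's actual algorithm is far more sophisticated, and real MP3 encoding does much more than simple quantisation; the point is only the round-trip behaviour):

    import zlib

    samples = bytes(range(256)) * 4            # stand-in for raw PCM audio data

    # Lossless: compress, decompress, and the original comes back bit-for-bit.
    packed = zlib.compress(samples)
    assert zlib.decompress(packed) == samples

    # Lossy (crude quantisation): throw away the low bits of every sample to
    # save space. The result is smaller, but the discarded detail is gone.
    quantised = bytes(b & 0xF0 for b in samples)
    assert quantised != samples                # the original cannot be recovered

In the lossless case the compressed file is merely a more compact description of the same data; in the lossy case, information is permanently destroyed.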

Formats aren't the only indicator of quality. Apart from the obvious factors (the quality of the recording gear, setup and instruments), there's the matter of dynamic range, or perhaps more importantly, dynamic range compression. To put it simply, the dynamic range of an instrument or piece of music is the ratio of the loudest sound to the softest. For example, the softest sound may be the subtle whisper of a backup singer, and the loudest the beat produced by drums.
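That ratio is conventionally expressed in decibels, as 20 times the base-10 logarithm of the amplitude ratio. A quick sketch (the amplitude values are invented for illustration):

    import math

    softest = 0.002   # hypothetical amplitude of the whispered backing vocal
    loudest = 0.9     # hypothetical amplitude of the drum hit

    dr_db = 20 * math.log10(loudest / softest)
    print(f"dynamic range: {dr_db:.1f} dB")    # ~53.1 dB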

Dynamic range compression is when you reduce the difference between these soft and loud notes to make everything louder. This is often used in modern mastering to make music sound louder than it should, to compete on media such as radio and television. The end result is a lifeless track, stripped of its musical integrity. So why is it done? It's part of the so-called Loudness War; you can read more about this phenomenon, including audio examples, in the Wikipedia article on the loudness war.
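At its core, a compressor of the kind used in mastering is just a gain curve that grows more slowly above a threshold, followed by make-up gain. A minimal sketch (the threshold, ratio and make-up values are arbitrary; real compressors add attack and release smoothing):

    def compress(sample, threshold=0.5, ratio=4.0, makeup=1.6):
        """Attenuate peaks above `threshold` by `ratio`, then apply
        make-up gain so the whole track ends up louder overall."""
        level = abs(sample)
        if level > threshold:
            # Anything over the threshold grows 4x more slowly.
            level = threshold + (level - threshold) / ratio
        out = makeup * level
        return out if sample >= 0 else -out

    quiet, loud = 0.1, 1.0
    print(round(compress(quiet), 2), round(compress(loud), 2))
    # 0.16 and 1.0: the original 10:1 gap shrinks to roughly 6:1

The quiet passage comes out louder while the peak stays at full scale, which is exactly the "everything loud" effect described above.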

Why does all this matter? When reviewing a sound device, or simply listening for pleasure, subjective judgement is based on listening to music. A perfect audio card (if one should ever exist) would reproduce sound exactly as the audio file dictates. However, just like any system, if you put rubbish in, you get rubbish out. This matters because a good audio device can make bad music sound worse: it becomes easier to distinguish imperfections that are part of the audio file's 'instructions'. For this reason, music will be carefully chosen for the subjective analysis of this card, along with a range of headphones.

Cooler Master GeminII S524 CPU Cooler

I've said before that since the coolers included with retail CPUs are both quiet and effective at stock clock speeds, overclocking is the only reason anyone should buy a third-party CPU cooler. Well, this isn't strictly true: not all third-party CPU coolers are designed to handle ultra-high overclocks in massive tower systems; sometimes what you need is a cooler that is more effective than the retail cooler in specific situations, such as the tight confines of a micro-ATX or HTPC case. When airflow's at a premium, a properly designed cooler that can fit into a tight space can be a big help. Benchmark Reviews looks at Cooler Master's new GeminII S524 cooler, which is designed for just such applications.

The GeminII is a "blow-down" style cooler, like the stock Intel and AMD coolers: a fan blows air down over an array of fins, rather than out the back or top of the computer as most third-party coolers do. The design used to be more popular, but it has largely been replaced by "tower" style coolers, which offer more effective cooling since they can be physically larger.

New texts

SSD performance will only get worse as NAND flash circuitry gets smaller.

SSDs are seemingly doomed. Why? Because as the circuitry of NAND flash-based SSDs shrinks, densities increase. But that also means that read and write latency will rise and data errors will become more frequent.

"This makes the future of SSDs cloudy," states Laura Grupp, a graduate student at the University of California, San Diego. "While the growing capacity of SSDs and high IOP rates will make them attractive for many applications, the reduction in performance that is necessary to increase capacity while keeping costs in check may make it difficult for SSDs to scale as a viable technology for some applications."

To test this theory, Grupp teamed up with Steven Swanson, director of UCSD's Non-Volatile Systems Laboratory, and John Davis of Microsoft Research. Using PCIe-based flash cards built to the Open NAND Flash Interface (ONFI) specification, with a channel speed of 400 MBps and a standard array of 96 NAND flash dies, they tested 45 different NAND flash chips from six vendors, with feature sizes ranging from 72 nm down to 25 nm.

The group discovered that the write speed for pages in a flash block suffered "dramatic and predictable variations" in latency. Moreover, the tests showed that as the NAND flash wore out, error rates varied widely between devices. Single-level cell (SLC) NAND produced the best results, whereas multi-level cell (MLC) and triple-level cell (TLC) NAND produced less than spectacular ones.

With the testing information at hand, the group fast-forwarded to the year 2024, when NAND flash circuitry is expected to be only 6.5 nm in size. They predicted that read/write latency will double for MLC flash and increase by more than 2.5 times for TLC flash. Yet SSDs at that time are expected to offer capacities of 4 TB using MLC flash and 16 TB using TLC flash.
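The projection itself is straightforward arithmetic over the measured baselines. A sketch (the present-day latencies below are invented placeholders, not the study's figures):

    # Hypothetical present-day read latencies in microseconds (illustration
    # only; the UCSD/Microsoft study reports its own measured values).
    baseline_us = {"MLC": 50, "TLC": 75}

    # Latency scaling factors predicted for 6.5-nm NAND in 2024.
    scale = {"MLC": 2.0, "TLC": 2.5}

    for cell, latency in baseline_us.items():
        print(f"{cell}: {latency} us today -> {latency * scale[cell]:.0f} us at 6.5 nm")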

"It's not going to be viable to go past 6.5-nm," Grupp said. "2024 is the end. [People] are used to working with technology that continues to just get better, but with NAND flash we're going to be facing trade-offs as it evolves."

Researchers have developed a technology that could be used to meet "last mile" requirements, providing high-speed digital communications in terrain that would normally not be suited for broadband installations.

The "wireless bridge" connecting two fiber-optic links ran at 220 GHz and achieved a bandwidth of up to 20 billion bits of data per second, which translates to about 18.6 Gbps.

"Instead of investing in the cost of digging trenches in the ground and deploying ducts for the fibers, data is transmitted via the air—over a high-speed wireless link," said Swen Koenig, a researcher at Karlsruhe Institute of Technology's (KIT) Institute of Photonics and Quantum Electronics, who will present the findings at the Optical Fiber Communication Conference and Exposition/National Fiber Optic Engineers Conference, which takes place in Los Angeles from March 4-8. Koenig said that wireless technology is an "inexpensive, flexible, and easy-to-implement solution" for the last mile problem.

In their test setup, the researchers converted an optical signal to an electronic signal and encoded it for transfer via a 220 GHz radio-frequency carrier. The modulated carrier is fed to an antenna, which "radiates" the signal and enables the receiving wireless gateway to capture it. The first experiments succeeded over a distance of just 50 centimeters, and later over 20 meters (about 66 feet). The researchers believe that the range will eventually reach about 20 km, or roughly 65,600 feet.

Two DDR4 memory modules were shown off at this year's ISSCC.

With DDR4 DRAM set to hit the market in 2013, two manufacturers took the opportunity at this year's International Solid-State Circuits Conference (ISSCC) to demonstrate their DDR4 DRAM. DDR4 is expected to represent 50 percent of the market by mid-2015, after its initial launch on the server side in 2013.

DDR4 is set to have data transfer rates of 2133 MT/s to 4266 MT/s, compared to the 800 MT/s to 2133 MT/s of DDR3. DDR4 is also expected to have significantly lower voltage requirements: it will need between 1.05 V and 1.2 V to operate, whereas DDR3 requires between 1.2 V and 1.5 V. The lower voltage is expected to reduce power consumption by 40 percent compared to a 1.5 V DDR3 module. DDR4 will not be pin-compatible with DDR3, which means the modules will not be backwards compatible.
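As a rough sanity check, dynamic power in CMOS logic scales with the square of the supply voltage, so the voltage drop alone accounts for most of the quoted saving (a first-order estimate; the 40 percent figure presumably also reflects other design changes):

    v_ddr3, v_ddr4 = 1.5, 1.2

    # First-order CMOS dynamic power scaling: P is proportional to V squared.
    relative_power = (v_ddr4 / v_ddr3) ** 2
    print(f"power at 1.2 V relative to 1.5 V: {relative_power:.0%}")  # 64%, i.e. ~36% less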

Samsung's DDR4 DRAM module achieves a data transfer rate of 2133 Mb/s per pin at 1.2 V; its DDR3 DRAM on the same 30nm-class process technology runs at 1.35 V or 1.5 V with speeds of up to 1.6 Gb/s. Hynix's DDR4 device works at 2400 Mb/s per pin at 1.2 V and processes up to 19.2 GB of data per second through a 64-bit I/O. Hynix used its 38nm manufacturing process technology, while Samsung employed the 30nm node.
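Hynix's 19.2 GB/s figure follows directly from the per-pin transfer rate and the bus width:

    transfer_rate = 2400        # mega-transfers per second, per pin
    bus_width_bits = 64         # 64-bit I/O

    bandwidth_mb_s = transfer_rate * bus_width_bits / 8  # megabytes per second
    print(bandwidth_mb_s / 1000, "GB/s")                 # 19.2 GB/s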

Other manufacturers, such as Elpida, Micron and Nanya, didn't show off their DDR4 prototypes but are expected to do so by the end of 2012. With DDR4 set to hit the PC-enthusiast market in 2014, will you make the jump to DDR4 straight away, or wait until it becomes mainstream, much as many users did with the switch from DDR2 to DDR3?