The Oryol Italians

Welko was the first company to set up

the production of quality tiles in Russia

The Italian company Welko Industrial spa founded ZAO Velor and broke ground on a ceramic tile factory in Oryol in 1988. To this day the factory remains the only foreign enterprise of its kind. Its original specialization was wall tiles. In 1998, however, a line for decorative elements was commissioned; in 1999, a floor-tile line; and in 2001, production of border tiles began. All output is sold in Russia and the CIS under the Kerama trademark.

The company's strategy is to sell products of European quality at prices below those of imported equivalents. Market research showed that the tiles could be priced above Russian ones but should not compete head-on with imports. The Italian-Russian tiles retail at 203-280 rubles per square meter (according to experts of the Starik Khottabych chain), which is dearer than Russian tiles at 150-190 rubles but cheaper than Italian and Spanish imports at 400-1,000 rubles per square meter. At this relatively low price Kerama keeps the quality of foreign tiles: the I.P.E. wear-resistance coefficient (a quality index) of the tiles made by ZAO Velor is the maximum value of five. A further competitive advantage is the company's network of 70 brand shops in Russia and Ukraine, which lets it control the final price of the tiles tightly and keep them within their price niche.

As a result of this strategy, the Oryol tiles with their Italian roots hold a firm position in the market. Russian products in the lower price segment (under 200 rubles per square meter), from Rostov for example, have a low wear-resistance coefficient and a narrow color range. The new factory in Fryazino, near Moscow, built by Lira Keramika, a company that sells and produces ceramic tiles, has only just started up, and its capacity is still smaller than that of ZAO Velor's plant in Oryol. Foreign competitors (Porcelanosa, Aparici, PGC and Incoazul Ceramica of Spain; Magica Ceramica and Omega of Italy) sell their tiles at no less than $11 per square meter, while the Polish companies Opoczno and Tubadzin and the Lithuanian firm Dvarcioniu, which compete with Kerama on price, supply their goods only in small lots.

The company turned the consequences of the 1998 crisis to its advantage: after the devaluation many consumers switched to cheaper domestically made products. New lines for decorative elements, floor tiles and border tiles were launched. As a result, by internal estimates, 25 percent of Russia's ceramic tiles and decorative elements are now made at the Oryol factory. Its annual capacity is 8 million square meters of wall tiles, 4.5 million square meters of floor tiles and 8 million decorative elements. Total Russian tile output in 2000, according to Goskomstat, was 50 million square meters.

The Italians From Oryol

Welko was the first foreign company

to start the production of high-quality tiles in Russia

In 1988, when joint ventures with Soviet and foreign capital were first emerging in the U.S.S.R., Welko Industrial Spa founded the joint-stock, private company VELOR ZAO as well as a ceramic wall-tile factory in the southern Russian town of Oryol. The factory started work in 1992, thus becoming the sole foreign enterprise in the tile industry. Initially, Welko specialized in the production of wall-facing tiles. However, in 1998 a line for decorative elements was commissioned; in 1999, a floor-tile production line was put into operation; and in 2001, the factory started to make border tiles. All Welko's products are sold in Russia and the CIS under the Cerama brand name.

The company's strategy is to offer products of European quality at a price lower than that of imported alternatives. Market research shows that, while the price of Welko's tiles can be higher than the price of Russian tiles, it should not compete directly with the imported tiles. According to data from the Starik Khottabych retail chain, the retail price for the Italian-Russian tile is 203 rubles to 280 rubles per square meter. This exceeds the price of Russian tiles (150 rubles to 190 rubles per square meter) but is lower than the price of tiles imported from Italy and Spain (400 rubles to 1,000 rubles per square meter). Even at its relatively low price, Welko manages to keep up the quality of foreign-made tiles: the Welko tile was awarded the highest possible quality index for its wear-resistance coefficient. An additional competitive advantage is Welko's chain of 70 retail shops in Russia and Ukraine, which allows it to regulate the final price of a tile strictly and keep it within its price niche.

As a result of this chosen strategy, tiles from the Oryol factory, Russian but with an Italian background, have a strong market position. Russian products in the lower price sector (up to 200 rubles per square meter), for example from Rostov-on-Don, have a low wear-resistance coefficient and a poor color range. A new factory of the Lira Ceramica company in Friazino, in the Moscow region, has just begun operation, and its production output is five times lower than that of the Oryol factory. Foreign competitors such as Porcelanosa, Aparici, PGC and Incoazul Ceramica, all from Spain, and Magica Ceramica and Omega from Italy sell tiles at a price not lower than $11 per square meter. Competing with Welko on price, the Polish companies Opoczno and Tubadzin and the Lithuanian firm Dvarcioniu can supply goods only in small quantities.

As a result of the devaluation that followed the 1998 economic collapse, many consumers shifted their preferences to cheaper Russian products. Welko not only survived the crisis but built new production lines for decorative elements, floor tiles and border tiles. According to its internal accounts, Welko today makes 25 percent of Russia's ceramic tiles and decorative elements. Annual output at the Oryol factory is 8 million square meters of wall-facing tiles, 4.5 million square meters of floor tiles and 8 million decorative elements. According to Goskomstat, total Russian tile output in 2000 was 50 million square meters.
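
The 25 percent figure can be cross-checked against the capacity and Goskomstat numbers just quoted. A one-line calculation in Python, on the assumption (mine, not the article's) that only wall and floor tiles count toward the square-meter total:

    # Back-of-envelope check of Welko's claimed 25% market share.
    # Capacities and the Goskomstat total are taken from the article;
    # excluding decorative elements from the total is an assumption.

    wall_tiles = 8.0      # million sq m per year at the Oryol factory
    floor_tiles = 4.5     # million sq m per year
    russia_total = 50.0   # million sq m, Goskomstat figure for 2000

    share = (wall_tiles + floor_tiles) / russia_total
    print(f"Implied share: {share:.0%}")  # -> Implied share: 25%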

HOLE IN THE HEAD

In 1962, a Peruvian brain surgeon, Dr. Francisco Grana, removed a paralyzing blood clot from beneath the skull of one of his patients. In opening the skull, he employed only stone instruments used by ancient Peruvian physicians. His patient survived the operation and recovered.

Thus Dr. Grana proved what many had known but scarcely believed — that physicians of ancient Peru were able to perform trepanation — or operations in which the skull was opened. Hundreds of ancient Peruvian skulls have been discovered with regularly cut holes. More than half of these skulls have shown signs of regrowth, indicating that the patient survived the operation.

Jurgen Thorwald tells this story in his book Science and Secrets of Early Medicine. He discusses medicine in the ancient societies of six countries: Egypt, Babylonia, India, China, Mexico and Peru.

In our European-centered culture, we like to think that medicine started with the Greeks, and before that all was darkness. Thorwald destroys this notion. The Greeks must have learned much from earlier societies.

The Egyptians used primitive forms of antibiotics, the Babylonians performed operations for cataracts of the eye, and the Indians knew of skin transplants and plastic surgery.

An examination of mummies shows that hardening of the arteries was very common among the upper classes in Ancient Egypt, even among the young.

One of the reasons for this is that despite the idealized slim portraits that have come down to us, many upper-class Egyptians were probably quite fat from overindulgence in the pleasures of the table. Medical researchers have also found physiological evidence to indicate that many of them also suffered from extreme nervous tension. "Intrigues, struggles for power, wars, religious disputes and internal dissension, attempts at poisoning and assassination and their own craving for excitement, must have caused a considerable part of the Egyptian upper class to lead a nerve-wracking life," Thorwald comments.

It's fair to conclude that nothing among our cures, diseases or even our tensions is exclusively a product of modern life.

YOU CAN LIVE ON YOUR WASTE

Astronauts on long space voyages will be the most spectacular misers of all time.

But "instead of hoarding string and crusts of bread and old newspa­pers, the spacemen will save such commodities as their breath and their perspiration.

The spacemen will conserve everything — literally — in their spacecraft. This will be necessary because expendable supplies for long voyages would be much too heavy and bulky to be stored in the spacecraft.

Enough food, water, and compressed air to sustain a crew of four on an eight-month trip to Mars, for instance, would outweigh the spacecraft itself.
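
A rough mass budget shows why. The sketch below, in Python, uses per-person daily supply rates that are my own assumptions (typical life-support planning figures), not numbers from the text:

    # Rough mass budget for unrecycled supplies on a Mars trip.
    # The per-person daily rates below are assumptions, not article data.

    crew = 4
    days = 8 * 30          # eight-month outbound trip

    water_kg_day = 3.5     # drinking and food-preparation water
    food_kg_day = 1.8      # packaged food
    oxygen_kg_day = 0.84   # breathing oxygen

    per_person_day = water_kg_day + food_kg_day + oxygen_kg_day
    total_kg = crew * days * per_person_day
    print(f"Consumables alone: {total_kg / 1000:.1f} tonnes")  # ~5.9 tonnes

Even before tankage and packaging, several tonnes of consumables would rival the mass of an early crewed spacecraft, which is the point the article is making.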

This means that long-distance spacemen will have to drink and re-drink the same water, over and over again. They will have to breathe and re-breathe the same air as long as the trip lasts. And, in essence, they will have to eat and re-eat the same food.

Sounds distasteful? Not really, according to space scientists working on "closed ecology" — the creation of an earth away from earth.

There is a vital interdependence between plants and animals on earth. For instance, plants need carbon dioxide for growth. They can get it from the exhaled breath of animals and humans. At the same time, animals need food which they can get by eating the plants.

In the "closed ecology" of earth there is a continuing cycle of using and re-using the resources available. In the long run, nothing is ever really thrown away.

Studies indicate it will" be practical to create a miniature, self-supporting earth away from earth in which astronauts will be able to survive — even thrive — for very long periods without replenishment of a single molecule of food, air or water from earth.

Drinking the same water again and again may seem repugnant, but it doesn't make any difference where it comes from — water is still H2O. As long as it is purified properly there is no problem. Besides that, people have been re-drinking water for centuries — water made from the same elements present on earth when the dinosaurs drank it and wallowed in it.

Long-range space vehicles thus will have to recapture every molecule of moisture in the spacemen's breath and perspiration, as well as other body wastes, and reprocess the water for drinking.

The means of purification will include distillation plus a technique known as catalytic oxidation which takes advantage of the readily available vacuum of space. Toxic portions of the waste liquid are broken down by catalytic oxidation into simple elements which then are used in other portions of the closed ecological system.

The astronauts' oxygen supply will be replenished by photosynthesis in which plant life absorbs carbon dioxide (exhaled by man) and gives off oxygen plus carbohydrates (food).

This will take place in a "space garden" carried in the spacecraft, in which algae will take the place of string beans, corn and lettuce. All that is required to make this system work is the sun's energy, without which, of course, farms on earth don't bloom.

Variations of this completely closed system are being developed for trips of intermediate length. On a moon exploration, for example, it would probably be more practical to use a "partially closed ecology".

In such a case, air and water could be purified and reprocessed by the means described above, while enough food for several weeks could be taken without using too much space or adding too much weight.

SPARE PARTS SURGERY

Steady progress is being made toward a medical objective of the highest importance: successful transplanting of life-sustaining organs from one individual to another.

Transplants were pioneered in 1951. For many years, however, the only successful transplants involved identical twins, whose body tissues are alike. When surgeons tried to replace diseased organs with healthy ones from unrelated donors, the recipients' bodies invariably rejected the foreign tissues.

Then, some years later, Dr. Francis D. Moore reported a case of a man who had lived a year on a kidney taken from a completely unrelated person.

Doctors have learned how, through the use of drugs, to control the body's tendency to destroy foreign tissue, a part of the body's defense against disease.

Experiments with animals indicate that some parts of the body, such as legs, can be preserved by freezing. Other parts can be kept for six hours or so by cooling them to just above freezing. This may prove to be a modest first step toward eventually solving the problem of obtaining and keeping a supply of spare body parts until needed.

Now kidney transplants are being performed in dozens of hospitals.

Surgeons are beginning to transplant other organs, too.

At the University of Mississippi surgeons transplanted a lung into a patient whose own lungs were destroyed by cancer and disease. The new lung functioned for 18 days before the patient died, ironically from kidney disease.

Doctors contend that even though some patients may die, their survival for a week or so indicates that successful transplants of lungs and livers are not far away. The fact that the livers functioned for some time after transplant, and in several cases were still functioning well long after the failure of the heart, shows that liver transplants are possible.

At other hospitals, transplants of limbs, ovaries, the pancreas and other organs are under study. Researchers at Cleveland's Metropolitan General Hospital are even looking into the remote possibility of eventual nerve or brain transplants. Already they have kept monkeys' brains alive for up to 12 hours totally outside the animals' bodies.

Doctors are exploring ways to control rejection. Some surgeons irradiate the transplant area with X-rays, and use chemotherapy.

The problem of establishing a supply of organs and other body parts is formidable.

Sometimes critical minutes elapse, during which the needed organs may deteriorate.

The problems are great indeed. But the promise is greater.

ELECTRICITY FROM NUCLEAR ENERGY

Nuclear power stations differ from conventional installations in that the heat used to boil water and generate steam comes from nuclear energy instead of from burning coal or oil.

As is known, each atom consists of protons (positively charged particles) and neutrons (uncharged particles), which constitute the nucleus; the nucleus is surrounded at relatively vast distances by electrons (negatively charged particles).

Nuclear fission is the process whereby a free neutron is made to penetrate the nucleus, causing it to break up. This releases other neutrons and energy in the form of heat.

So great is this nuclear energy potential that the atoms in a piece of uranium the size of a pin-head could produce as much heat as the burning of 5,000 tons of coal.

The atoms of most materials are quite stable, but the nuclei of some very heavy elements are not. If a uranium nucleus is struck by a neutron it is liable to break up, and to release two or three free neutrons. If these are slowed down some of them will be caught by other uranium nuclei, which will then break down and continue the process of chain reaction. This slowing down of the freed neutrons is accomplished by using a moderator.

This is a material which slows down neutrons without capturing them. Graphite is such a material. Uranium fuel is prepared in the form of rods about one inch in diameter. These are encased in thin metal cans, and they are inserted into holes in the graphite about eight inches apart.

It is no good being able to start a nuclear reaction and to obtain great heat output unless the process can be controlled. A chain reaction can be started by bringing together a "critical" amount of uranium fuel in a graphite moderator, but there must be at hand a means to reduce the speed of reaction when necessary, and this is done by installing, as part of the reactor, rods of boron steel. Boron has a remarkable capacity to absorb neutrons. When instruments indicate that nuclear fission is proceeding too fast, the boron rods can be dropped into the reactor. They quickly soak up free neutrons, so that the frequency of fission is immediately reduced and a steady rate of operation can be resumed.
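
To make the role of the moderator and the boron rods concrete, here is a minimal generation-by-generation sketch in Python; the 2.5 neutrons per fission is a typical textbook figure, and the two probabilities are illustrative assumptions, not design data:

    # Minimal sketch of chain-reaction control: a simple
    # generation-by-generation neutron count, not a reactor-physics code.
    # The probabilities below are illustrative assumptions.

    neutrons_per_fission = 2.5

    def next_generation(n, p_fission, p_boron_absorb):
        """Each fission neutron is absorbed by boron, lost, or causes a new fission."""
        k = neutrons_per_fission * p_fission * (1.0 - p_boron_absorb)
        return n * k

    for rods_in in (False, True):
        p_boron = 0.25 if rods_in else 0.0
        pop = 1000.0
        for _ in range(10):  # follow ten generations
            pop = next_generation(pop, p_fission=0.42, p_boron_absorb=p_boron)
        print(f"rods {'in' if rods_in else 'out'}: {pop:,.0f} neutrons")

With the rods out the effective multiplication factor sits just above one and the population grows; dropping the rods in pulls it below one and the reaction dies away, which is exactly the control behavior described above.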

The reason for using natural uranium as a fuel is that it is the only naturally occurring material which can produce a controlled chain reaction. Because of the escape of neutrons from the moderator, this process can only take place in a reactor of a certain minimum size. This is known as the critical size. If, in a reactor, the control rods are positioned so that power is neither increasing nor decreasing, the reactor is said to be critical.

NEW EVIDENCE OF INTELLIGENT LIFE ON OTHER WORLDS

Two points on star charts, known only by their catalogue numbers, CTA-21 and CTA-102, may be the sites of fantastically advanced civilization. What's more, they may be trying to get in touch with us.

This idea was proposed by Nikolai S. Kardashev, a Soviet astronomer.

Kardashev made his proposal in the highly respected journal, Astronomichesky Zhurnal, and he gave full credit to the ideas of the many Western astronomers who have done a great deal of thinking about intelligent life on other worlds during the past few years.

What's so special about these two sources, CTA-21 and CTA-102? First, the sources of the radio (microwave) emissions are optically invisible, that is, they are either too small, too dim or too far away to be seen with the best earthbound telescopes. Secondly, the waves originate from tiny "points" in the sky rather than from large expanses of gas as is typical of many radio stars.

But most important, the spectrums of the radio emissions themselves are unlike any others that have yet been found in the heavens, and they are close to what scientists consider the optimum frequency for communication between stars.

Kardashev compared the spectrums of a hypothetical artificial source that he believed would best convey messages between stars with the spectrums of emissions from CTA-21 and CTA-102. The correlation was not quite perfect, but it was much closer than that from any other sources yet discovered.
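
The comparison itself is a pattern-matching exercise. The sketch below, in Python, shows the kind of correlation test involved; the frequency grid, the Gaussian beacon shape and the noise level are all my assumptions, not Kardashev's data:

    # Sketch of the comparison described above: correlating a hypothetical
    # beacon spectrum with an observed one. All numbers are illustrative.
    import numpy as np

    frequencies = np.linspace(0.5, 10.0, 8)            # GHz, assumed grid
    beacon_model = np.exp(-(frequencies - 3.0) ** 2)   # assumed optimum band
    observed = beacon_model + np.random.default_rng(0).normal(0, 0.1, 8)

    r = np.corrcoef(beacon_model, observed)[0, 1]
    print(f"correlation: {r:.2f}")   # high, but not exactly 1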

Kardashev's boss, Prof. Iosif Shklovsky, one of Russia's best theoretical astronomers, once proposed that an advanced civilization would attempt to reach others among the stars by using a powerful radio beacon, emitting the same sort of microwaves coming from CTA-21 and CTA-102. This theory is also supported by many US scientists.

Kardashev postulated three levels of civilization, on the basis of the power they can generate.

Type One, like the earth, can generate around 4,000 billion watts of power.

If the rate of power output of the earth continues to grow from 3 to 4 per cent a year, as it has for the last 60 years, in 3,200 years earth will be a Type Two civilization; one that can produce roughly 400 million billion billion watts — roughly the output of the sun.
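
The arithmetic behind that horizon can be checked by assuming steady exponential growth. The result is quite sensitive to the rate assumed; a 3,200-year horizon corresponds to growth of roughly 1 per cent a year:

    # Years needed to grow from Type One to Type Two power output,
    # assuming steady exponential growth: n = ln(P2 / P1) / ln(1 + r).
    import math

    p_now = 4e12       # watts, Type One (the earth today, per the article)
    p_type_two = 4e26  # watts, roughly the output of the sun

    for r in (0.01, 0.03, 0.04):
        years = math.log(p_type_two / p_now) / math.log(1 + r)
        print(f"growth {r:.0%}: {years:,.0f} years")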

A Type Two civilization, Kardashev believes, would have enough energy to spare to build beacons that would radiate like CTA-21 and CTA-102.

Type Three civilizations, as postulated by Kardashev, really overwhelm the imagination. They would produce 40 billion billion billion billion watts — the output of an entire galaxy.

Dr. Shklovsky and his associates have often made suggestions about extraterrestrial civilizations. Now it seems that detailed discussions have become completely respectable for all Soviet astronomers.

The same sort of thing happened in the United States in 1961. Up to that time, the subject had been almost solely the province of science-fiction writers.

What if we definitely establish that the emissions from CTA-21 and CTA-102 are beacons of super-civilizations? There still remain enormous difficulties in trying to understand the messages being sent. And because of the distances involved, we probably would not be able to establish two-way communication within this century.

But the mere discovery that there was someone out there trying to get in touch with us would be one of the most thrilling events in the history of man.

VELOCITY HAS ITS LIMITS

Before the Second World War the speed of aircraft was far below the speed of sound. Today we have supersonic aircraft. Radio waves propagate at the velocity of light. Could we perhaps create "superlight" telegraphy to send signals at velocities greater than the velocity of light? No, that is an impossible thing to do.

Since experiment disproves the absolute nature of time, we conclude that signal transmission cannot be instantaneous. The velocity of transmission from one point in space to another cannot be infinite; in other words, it cannot exceed some ultimate value, called the speed limit.

This speed limit coincides with the velocity of light.

Indeed, according to the principle of the relativity of motion, the laws of nature are the same for all laboratories moving relative to each other (rectilinearly and with uniform velocity). The affirmation that no velocity can be greater than the given limit is also a law of Nature, and therefore the value of the speed limit should be exactly the same in different laboratories. The velocity of light, as we know, possesses the same qualities. Thus, the speed of light is not merely the speed of propagation of a natural phenomenon. It plays the important part of being the top velocity.

The discovery of the existence in the Universe of the top velocity is one of the greatest triumphs of human genius and of the experimental capacity of mankind.

In the 19th century physicists were unable to perceive that a top speed existed and that its existence could be proved. Moreover, had they stumbled upon it by chance in their experiments, they would not have been sure that it was a law of Nature and not merely an effect of their limited experimental capacity.

The principle of relativity reveals that the existence of a top velocity lies in the very nature of things. To assume that technological development will enable us to attain velocities greater than the velocity of light is just as ridiculous as to suggest that the absence of points on the Earth's surface more than 20 thousand kilometres apart is not a geographical law, but the upshot of our limited knowledge, and to hope that some day, when geography makes further advances, we shall be able to find points on the Earth that are still farther apart.

Light velocity plays such an exceptional part in Nature exactly because it is the top velocity for the propagation of anything. Light either outstrips all other phenomena, or, at the outside, arrives simultaneously with them.

If the Sun should split in two and form two stars, the motion of the Earth would, naturally, suffer a change as well.

The 19th-century physicist, who did not know that a top velocity existed in Nature, would certainly assume that the Earth changed its motion instantly after the Sun split in two. Yet it would have taken light all of eight minutes to cover the distance from the split Sun to the Earth. The change in the Earth's orbital motion would begin eight minutes after the Sun split up. Until that moment, the Earth would continue to move as if the Sun had not split. Anything that may occur with or on the Sun will not affect the Earth or its motion until eight minutes later.
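
The eight-minute figure is easy to verify from the mean Sun-Earth distance, a quick check in Python:

    # The eight-minute delay, computed from the Sun-Earth distance.

    distance_m = 1.496e11   # mean Sun-Earth distance, metres
    c = 2.998e8             # speed of light, m/s

    print(f"{distance_m / c / 60:.1f} minutes")  # -> 8.3 minutes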

SIZEWELL POWER STATION

All over the world the demand for electricity is steadily rising. Not only do modern industrial methods call for more and more electrically driven machinery but the rising standards of living are reflected in the domestic consumer's increasing use of the many forms of electrical labour-saving devices.

A big modern power station burns enormous quantities of coal; in full production, the furnaces of a station like High Marnham, Nottinghamshire, consume 10,000 tons a day. Economic logic requires that this huge appetite should be met from the country's most productive coalfield — the East Midlands — and stations sited as near as possible to the fuel source. But Britain's population is increasing most rapidly in the south, and there is a consequent sharp rise in demand for electricity from the coal-deficient area lying south of a line drawn from the Bristol Channel to the Wash.

Transport charges for nuclear fuels are negligible and the siting of nuclear power stations is not governed by this economic consideration. Main factors, besides the all-important amenity consideration, affecting the choice of site are the availability of the large quantities of cooling water necessary, geological substrata which can support the very heavy station structure and plant, and a reasonable degree of remoteness, so that, in the extremely unlikely event of a mishap, the temporary evacuation of people living close to the station could be easily achieved.

Sizewell nuclear power station is situated on the Suffolk coast between Aldeburgh and Southwold.

The Station, when completed, will have a guaranteed net electrical output of 580 megawatts. The main plant consists of two natural uranium, carbon dioxide gas cooled, graphite moderated reactors, supplying heat to eight boiler units, four of which are associated with each reactor. Both reactors will be housed in one building, a feature which at present is unique to Sizewell. This has permitted a much more compact reactor layout to be adopted, and will result in a saving of reactor building costs. The four boiler units associated with each reactor will be arranged in pairs on opposite sides of the reactor building. The total steam produced, which will exceed 5 1/2 million lb. per hour, will be passed to two 324.75 megawatt turbo-alternators.

For condensing purposes 25 million gallons of cooling water per hour will be required, and this will be drawn to the underground pumphouse from the sea through twin tunnels ten feet in diameter, which will run from a point about 1,350 feet offshore. At this point two vertical intake shafts will be raised through the sea bed from each of the tunnels. To handle the screens which will be placed at the top of the intake shafts, and to insert the shaft sealing plugs required when tunnel inspection is to be carried out, a crane mounted on a tubular steel structure is being provided. This structure will be built on the beach and subsequently floated out to its offshore location and fixed in position by steel piles. From the pumphouse the water will be circulated in concrete culverts to the condensers and returned through twin outlet tunnels to discharge to the sea 350 feet offshore.
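
Those intake figures imply a modest water speed in the tunnels. A rough check in Python, assuming imperial gallons and an equal split between the two tunnels (both assumptions mine):

    # Implied water speed in the twin intake tunnels.
    import math

    flow_m3_s = 25e6 * 4.546e-3 / 3600   # 25 million gal/h in cubic metres/s
    radius_m = (10 * 0.3048) / 2         # ten-foot diameter tunnel
    area_m2 = 2 * math.pi * radius_m ** 2

    print(f"flow: {flow_m3_s:.1f} m^3/s, speed: {flow_m3_s / area_m2:.1f} m/s")
    # -> flow: 31.6 m^3/s, speed: 2.2 m/s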

The reactor pressure vessels will be spherical in shape, 63 feet 6 inches in diameter and formed from mild steel plate 4 1/8 inches thick. The plates are pressed and cut to size and, in some instances, welded together in pairs at the manufacturer's works before being despatched to site. On site, the plates are welded together to form rings before they are transferred to within the biological shield of the reactor building by the 400 ton capacity Goliath crane. Ring sections are then manually welded together inside the shield to form the complete sphere, with 100 per cent radiographic examination of the weld metal being undertaken.

The completed vessel is to be stress-relieved by installing electric heating elements internally and, following this, a pneumatic pressure test is carried out to approximately 1.5 times the designed working pressure.

Within the pressure vessel each reactor core will be built from graphite blocks arranged to form a vertical 24-sided polygon, approximately 49 feet in diameter and 26 feet in height. The core will be penetrated by over 4,000 vertical holes of approximately four inches diameter of which 3,788 holes will be used for fuel channels, the remainder being required for control rods, sector rods and test facilities.

The graphite mass will weigh approximately 3,000 tons and will be supported by a prefabricated diagrid structure the periphery of which will rest upon an internally protruding ring formed by a part of a special cruciform forging which is an integral part of the reactor pressure vessel. The external ring of the cruciform forging will transfer the combined weight of the core and reactor pressure vessel through a vertical cylindrical steel skirt plate 34 feet 8 inches in diameter and 21 feet 6 inches in height to the reactor foundations.

Each fuel channel will contain seven natural uranium rods, each 1.1 inches in diameter and 41 inches long, sheathed in magnesium alloy cans.
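
Together with the 3,788 fuel channels given earlier, these rod figures imply the core's total fuel inventory. A rough Python estimate, assuming a uranium-metal density of about 19 g/cm3 (my figure, not the article's):

    # Implied fuel inventory, from the channel and rod figures above.
    import math

    channels = 3788
    rods_per_channel = 7
    diameter_cm = 1.1 * 2.54    # 1.1 inches
    length_cm = 41 * 2.54       # 41 inches
    density_g_cm3 = 19.0        # assumed, natural uranium metal

    rod_volume = math.pi * (diameter_cm / 2) ** 2 * length_cm
    tonnes = channels * rods_per_channel * rod_volume * density_g_cm3 / 1e6

    print(f"{channels * rods_per_channel:,} rods, "
          f"about {tonnes:,.0f} tonnes of uranium")
    # -> 26,516 rods, about 322 tonnes of uranium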

Each reactor and the associated boiler unit will be connected by gas ducting 6 feet 6 inches in diameter, through which CO2 gas at a pressure of approximately 275 pounds per square inch will be passed. The heat produced in the reactor will be absorbed by the CO2 gas and transferred via the gas ducts to the boiler units.

Gas circulation will be effected by four gas blowers, each driven at a constant speed of 1,487 revolutions per minute by an A. C. motor of 9,850 horsepower. The blowers will be vertically mounted directly underneath each of the four boiler units and will pass the gas from the bottom of the heat exchanger to the base of the reactor pressure vessel.

The reactors will be refuelled on-load, the fuel channels being replenished progressively by fuelling machinery operating from the pile cap.

Two 324.75 MW turbo-generators, one associated with each reactor, will produce electrical power. Each turbine will consist of a high pressure cylinder and three double flow low pressure cylinders arranged in line and operating at 3,000 revolutions per minute with a condenser vacuum of 29 inches of mercury. The Station will operate on a dual pressure steam cycle, with the turbine high pressure cylinder designed to accept steam at two pressures: high pressure steam is first expanded to the low pressure steam top valve pressure, after which point low pressure steam is admitted and expansion of the combined steam flow takes place to the high pressure cylinder exhaust pressure. To reduce erosion of the last rows of low pressure cylinder blades, live steam re-heating of the high pressure exhaust steam is employed.

The steam, after expanding through the low pressure cylinders, is condensed and then pumped out of the condenser by a condensate extraction pump. Heating of the condensate is effected by two low pressure surface type heaters and a direct contact type deaerator. Steam for the two low pressure heaters is bled off at two points from the turbine low pressure cylinder, and at a point in the high pressure cylinder for the deaerator.

A common cooling pond and handling equipment will be provided in which irradiated fuel discharged from the reactors can be stored for several months to allow the high intensity radiation to die down before it is transported in shielded flasks to the Atomic Energy Authority's chemical factory for processing.

Control of the Station will be from a central control room where the main switchgear controls, main running controls for blower speed, control rod position, turbine throttle setting and all important indicating, recording and alarm instrumentation will be located. Starting and supervision of plant is to be carried out locally.

IS THERE A FIFTH FORCE IN THE UNIVERSE?

A new force in the universe may have revealed itself by threatening two of physics' most precious rules. In an experiment last year, the world's most powerful atom-smasher seemed to pulverize the two laws. Scientists are now trying to put them together again by using the same atom-smasher to detect a possible interloper in the first experiment — a previously unknown force.

There are four known forces in the universe; the new one would be the fifth force. Gravity is the weakest known force, and was the first to be recognized. Newton fairly well defined gravity 300 years ago. It takes a body the size of the earth to make a pint of water weigh a pound.

The electromagnetic force is the other one that influences the man-size world. Ben Franklin first recognized its true nature, and most of its actions had been explained by the end of the nineteenth century. All of its manifestations arise from positive and negative electric charges. An imbalance of charge makes plastic bags stick together or stands your hair on end. When the imbalance becomes too great between the earth and a cloud, lightning strikes. Charges in motion create a magnetic field. Radio, radiant heat, light and X-rays are electromagnetic. In an atom the electromagnetic attraction between positively charged protons in the nucleus and negatively charged electrons keeps the electrons locked in orbit.

Yet, in the nucleus, protons that repel each other exist close together. This is because the electromagnetic force is overcome by the most powerful known force, the nuclear binding force. It has been unleashed in the atomic bomb. In the laboratory physicists can tell the force is at work in a nuclear reaction if the reaction occurs very quickly.

Other nuclear reactions occur very slowly in comparison. Scientists reason that they are caused by a weaker force acting over a long time — a ten-billionth of a second or more. This force is called simply the force of weak interactions, and accounts for radioactive decay, the spontaneous breaking apart of atoms or the particles that make up atoms. It is a troublemaker. It doesn't seem to follow the rules.

Nuclear physics is partly a matter of book-keeping. Some of the book-keeping rules are conservation laws, such as the conservation of energy. The sum of the energy before the reaction must equal the sum of the energy after the reaction.

Another rule is the conservation of parity. Parity is the mathematical expression of the principle that the mirror image of a physical process shows a process that is possible. Switching left and right does not change physical laws, according to this principle.

There is positive parity and negative parity. The sums of parity before the reaction and after the reaction must always be the same. Or almost always. There was one particle called the K-zero meson (K°) (zero because it has no electric charge, and meson because it is a middle-weight among particles) that seemed to decay into particles with both kinds of parity. In this one way it acted as if it were two particles, K1 and K2. In all other ways it acted as if it were one. In order to save the law of conservation of parity it would have been convenient to call K° two particles. But in 1956 two young physicists, Chen Ning Yang and Tsung-Dao Lee, working at Brookhaven National Laboratory, Upton, Long Island, decided facts were facts. Parity probably didn't hold for weak interactions. K° was only one particle.

They suggested an experiment to prove their point.

Mrs. Chien-Shiung Wu of Columbia U. and a group of low temperature specialists from the National Bureau of Standards carried out the Lee and Yang experiment. They watched the decay of cobalt-60 and found that the mirror image of the decay is of a reaction that never takes place, so is considered impossible. Parity had fallen.

Physicists quickly caught their balance. If every particle in the mirror image is replaced by its anti-particle, it once again shows a picture of a possible reaction, they noted. Charge conjugation is the name for the act of switching matter and anti-matter in a reaction. Parity (P) alone may not be invariant, but parity and charge conjugation (C) together (CP) are invariant, they figured. In fact, physicists refurbished the old mirror image completely, saying that perhaps they just hadn't recognized what they had seen in a mirror, that the image may have consisted of anti-matter all along.

An anti-particle is like its corresponding particle except that it has the opposite charge or the reverse spin if the particle has charge or spin. A particle and anti-particle can be created as a pair from energy, and if they meet they annihilate each other and return to energy. One way of describing an anti-particle mathematically is as a particle going backwards in time. We are made of matter so we go forward in time. This sounds peculiar, but in relativity there is nothing special about any one direction in time. According to the principle of time reversal invariance (T), one can take a movie of any event, run it backwards and watch perfectly possible happenings. They don't have to be probable. Time reversal invariance means physical laws will remain unchanged if the direction of time is reversed.

Physicists had CP invariance and T invariance separately, and they joined them together into CPT invariance. Supposedly, physical laws would be unchanged in a mirror image of an event in which all particles were replaced by their anti-particles and time was reversed.

That was how things stood when a Princeton U. team under Val Fitch examined the decay of K2. The combination of a weak interaction and the K2 again made trouble. Fitch was watching for the break-up of K2 into two pi mesons. It had never been seen before, but only a few hundred decays had been checked. Two pi decay would violate CP invariance and send it down the drain. Fitch wanted to put a firmer foundation under CP invariance, so he checked 22,700 decays. Unfortunately, he found 45 two pi decays. CP was lost, and with it either CPT or T. Physicists had a choice. If they admitted a violation of time reversal invariance, they could make CPT add up right. If they abandoned CPT invariance, they could save T. They firmly believed in CPT, and time reversal invariance was vital to relativity. They took neither alternative. Instead, they called on a new force to save the day.
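
Expressed as a branching fraction, the effect Fitch found was tiny:

    # The size of the effect, from the counts quoted above.

    two_pi_decays = 45
    decays_checked = 22_700

    print(f"{two_pi_decays / decays_checked:.2%} of K2 decays")  # about 0.20%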

This fifth force, if it exists, is the weakest one, ten billion times weaker than gravity. It may have taken the fifth force of the whole galaxy to influence the experiment. Its action would arise because our part of the universe has so much more matter than antimatter. It may have changed some K2 into K1, which ordinarily decay into two pi mesons.

The effect of the fifth force would increase by the square of the particle energy. With about the same set-up, Fitch collected data at higher energies. CERN, the European Center for Nuclear Research near Geneva, has an alternating gradient synchrotron almost as powerful as the one at Brookhaven. R. Mermod collected evidence with it.

Both teams are also checking for possible sources of error in the original experiment that would explain the results, but they don't really expect to find any.

HOW CLEAN IS CLEAN?

All things are relative.

The hospital operating room, with all its devices for ensuring cleanliness, may look clean, but it would still not be considered clean enough in which to assemble a delicate gyro or certain high-reliability electronic devices.

The hospital operating room is concerned to a great extent with the problem of disinfection and the use of germicides; the electronics industry has come to be concerned with environment control (temperature, humidity, pressure) plus — the "plus" being the control of air-borne contaminants and process-induced contaminants, also called "particulate matter", or matter existing in minute, separate particles in the air. An area so controlled has been called a "dust-free area", but the term is not sufficiently inclusive, since there are many contaminants, including not only dust, but smoke, odors, even noise — because noise is a sound that moves and thus can cause dust particle motion.

The best term for an enclosed area with controlled environment is probably "clean room", though "white room" was also used several years ago, but is not as applicable now, since the dead, white walls, a la hospital operating rooms, looked so sterile they were depressing to the workers. Warmer tones are now used, in pleasing combinations.

Federal Standard 209 defines a clean room quite simply as one in which the environment is controlled — and goes on to set the standards for temperature, humidity, pressure and particulate matter in the air. The first three are comparatively simple. Temperature should be maintained between 67° and 77° F., except for special jobs requiring critical temperatures. Relative humidity should be 45 per cent maximum, generally ± 10 per cent, except for humidity-sensitive applications. Pressure should be maintained above that of surrounding areas, so that all leakage will be outward (in a reversed condition pressure would bring more contaminants into the area).

But the contaminants part of the clean room spec is much more detailed, and provides a sliding scale for arriving at an answer to the question: "How Clean is Clean?" A rule-of-thumb answer is that, so far as industry clean rooms are concerned, the fewer contaminant particles present, the cleaner the room. The Federal Standard divides clean rooms into three classes, according to the number of particles 0.5 micron in size or larger in each cubic foot of air: classes 100, 10,000 and 100,000. A secondary limitation is concerned with particles of 5 micron size or larger per cubic foot: class 10,000 limits the number of this size to 65; class 100,000, to 700. Some industrial firms use their own methods of classifying clean rooms, but most methods still involve varying numbers of particles per cubic foot having dimension greater than x microns (generally 1 to 5), with secondary limitations on the number of particles of larger sizes (5, 10 or 20 microns).
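
The class limits quoted above amount to a small decision rule. A sketch in Python, following the Federal Standard 209 counts exactly as the text gives them:

    # Clean-room class test, per the limits quoted from Federal Standard 209.
    # Counts are particles per cubic foot of sampled air.

    def classify(count_0_5_micron, count_5_micron):
        if count_0_5_micron <= 100:
            return "class 100"
        if count_0_5_micron <= 10_000 and count_5_micron <= 65:
            return "class 10,000"
        if count_0_5_micron <= 100_000 and count_5_micron <= 700:
            return "class 100,000"
        return "not a clean room under the standard"

    print(classify(80, 0))        # class 100
    print(classify(9_500, 40))    # class 10,000
    print(classify(60_000, 300))  # class 100,000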

In the Federal Standard, class 100 is 100 times cleaner than class 10,000, and is a cleanliness difficult to achieve. The use of the micron, which is a millionth of a meter, is a more convenient notation than the inch, since one micron is equivalent to .00003937 inch (4/100,000 of an inch). To understand the size of a micron, it may be pointed out that a particle of even 50 microns is microscopic. The smoke from the filter end of a cigarette is made up of particles of 10 microns; the human hair (.003 inch), is nearly 80 microns in diameter.

To achieve the standard, the particles must be counted periodically by one or more existing methods. This matter of sampling the clean room air is one of the most important parts of the entire operation. The Federal Standard states that all clean rooms shall use some particle counting method. There are two common methods: (1) by automatic equipment employing light-scattering techniques, for particles of 2 microns and larger; and (2) microscopic counting, for particles 5 microns and larger.

The light-scattering technique provides for a continuous sampling of the air stream. It gives indication of particle size and concentration with instantaneous readout. In this system the light from particles in the sample stream is scattered into a phototube. The light pulses are then converted into voltage pulses, and are sized and counted in the counter's electronic system.

The second counting method uses samples taken either by settling (collecting particles in a dish coated with an adhesive), or by filtration — passing a sample of the air stream through a membrane filter. Either method involves microscopic examination by either an optical or an electron microscope, though particles of one-half (0.5) micron in size are about the smallest that can be resolved by the optical microscope. The actual count is done by using grid squares or unit areas, counting the particles in them, and then computing the number for the entire field.
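
The grid-square count is a simple extrapolation from a few sampled squares to the whole field. A minimal Python illustration, with made-up counts:

    # Grid-square estimate: count a few unit areas under the microscope,
    # then scale the mean to the whole filter. Sample numbers are made up.

    counts_per_square = [12, 9, 14, 11, 10]   # particles seen per grid square
    squares_on_filter = 480                   # assumed total grid squares

    mean_count = sum(counts_per_square) / len(counts_per_square)
    estimated_total = mean_count * squares_on_filter
    print(f"estimated particles on filter: {estimated_total:,.0f}")  # 5,376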

The clean room, thus, is an enclosed area with environment controlled and adequate provision for monitoring the controls. All precautions should be taken in its construction—insulation where required, rounded corners, smooth vinyl floors with minimum joints, double-glazed flush windows, fluorescent lighting and emergency exits opening from the inside only. Items peculiar to such rooms include: airlock material pass-throughs, inter-communication between outside and inside, non-dusting type tables and benches and vacuum cleaner connections. For critical operations, a hooded work station with additional air filtering is necessary. This type of individual station is coming more and more into favor in clean rooms, since it offers the dust-free advantages of the "glove box" but without its hindrance to completely free access to devices being worked on.

In addition, the human element involved requires very highly specialized planning. Entrances must be equipped with clothing storage areas, one for street clothing, one for dust-free clothes, with space for changing, shoe-sole cleaner mat and motorized shoe brusher. In a separate compartment is a high-velocity air shower, also called a "man cleaner", with interlocked doors. Such shower provides an air flow (30 mph is adequate) for varying intervals (perhaps 10 to 30 seconds). Clean rooms for more critical operations may have two (or more) air showers, one to be used before putting on the lint-free clothing, one to be used afterward but before entering the regular working area.

Other human factors must be taken into account. To combat claustrophobia — common among clean room beginners and caused by the many restrictions — windows facing outside hallways are necessary. To combat what has been called "clean room blues", some firms use background music systems, either continuously or at intervals.

The worker must become accustomed to the lint-free clothing, which varies in design in accordance with the degree of cleanliness desired. The most exacting conditions involve the use of complete overalls, closed at neck and wrists, with bottoms tucked in, and headcovers, shoecovers, gloves or finger cots. Less exacting conditions may require only smocks and headcovers. Neither type of garment, generally, has pockets, except for a slot for a ballpoint pen, which should be the one-piece type.

Clean room clothing is still not perfected. Heavy-weave dacron is uncomfortably warm. As to hats, the women have found that cloth hats are warm, resulting in itching scalp and brittle hair, not to mention ruined hair-dos. A remedy has been found in the use of throw-away lint-free paper hats, with some girls wearing two at a time to create a more fashionable effect. Gloves, too, have been a problem, because of sweating and other discomfort. Nylon or rubber finger covers (cots) may replace them for all but the most critical operations.

The worker, on entering the clean room area, uses the shoe cleaners, dons the lint-free attire, goes through the air shower and through an airlock into the work area. While at work, workers may not smoke, eat, drink, comb hair, or clean their fingernails. Needless roaming is discouraged because it disrupts the airflow and brings dust up from the floor. But work stations have been designed to be comfortable, with normal-level work tables, and padded stenographer-type chairs with back rests.

The "why?" for all these restrictions, regulations and costume standards, is part of the "why?" for clean rooms themselves, with their rigid control of all environmental of contaminants. The developing aerospace industry is the motivating factor, and the objective is a simple one. Contamination can have disastrous effects on reliability. It can increase friction, cause leakages, or erode moving parts.

So the basic clean room objective is the elimination of contaminants. A better statement might be that it is to increase the quality and reliability of critical hardware — particularly items that are subject to failure because of contamination. It is essential that products be free of dust, and nowhere is this need stronger than in the manufacture of missiles, satellites, or spacecraft. An exacting control must be maintained over all environmental factors in the production, assembly, packaging and testing of critical products.

It has been predicted that, within the next five years, every firm associated with missile and space programs will require not just clean rooms, but acres of them. For the successful prosecution of the more and more elaborate, exacting and sophisticated projects daily growing in number, clean will have to be even more than clean—it will have to be immaculate!

MACARONI

Macaroni is made, as a rule, from semolina or farina. Some of the best macaroni products are made entirely from farina, though many of the macaroni manufacturers look upon semolina as the raw material "par excellence" for the production of the highest-grade macaroni, spaghetti, and vermicelli.

Many macaroni manufacturers at a distance from the semolina milling centers are obliged to use considerable flour, as the freight rate on semolina makes its use prohibitive. Some manufacturers, for example, use flour of different grades, for example, straight and/or cut-offs, made from Bluestem wheat, either as such or blended with semolina; others use farina or semolina without any admixture of flour.

Some manufacturers use a 95% hard winter flour instead of semolina.

In general, 100 pounds of flour will make 94 pounds of marketable macaroni products; from 1% to 2% of the macaroni is wasted in the process of manufacture. There is also a loss of about 4% which is largely due to the difference in moisture content of macaroni and the raw material.
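
Those yield figures can be restated as a simple mass balance. A sketch in Python, taking the midpoint of the quoted waste range as an assumption:

    # Mass balance per 100 lb of flour, from the figures above.

    flour_lb = 100.0
    process_waste = 0.015   # 1 to 2 per cent, midpoint assumed
    moisture_loss = 0.04    # drying loss, about 4 per cent

    marketable_lb = flour_lb * (1 - process_waste - moisture_loss)
    print(f"marketable product: about {marketable_lb:.0f} lb")  # ~94 lb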

The first step in the manufacture of macaroni is the doughing process. For every 100 pounds of semolina or farina some 26 to 30 pounds of water ranging in temperature from 70° to 140° F. are used.

The quantity of water varies with the kind of product to be made and the nature of the raw material, less water being used for vermicelli than for macaroni. No other ingredients (except occasionally a small percentage of salt) are used. After being mixed for 10 to 20 minutes, at a temperature of about 104° F., the smooth, firm dough is transferred to a kneading machine or "Gramola".

The modern kneading machine consists of a revolving circular steel pan 8 feet in diameter, carrying two revolving, corrugated, conical iron workers weighing as much as 3 1/2 tons each. In operation it is similar to a butter worker.

The dough is kneaded for 10 to 20 minutes to thoroughly incorporate the water with the semolina or farina and to produce a uniform, smooth, stiff dough.

When thoroughly kneaded, the dough may be either transferred direct to the press or rolled into sheets, folded into cylinder or cartridge form, and then transferred to the press, which is maintained at a temperature of about 104° F. to keep the dough plastic.

In the vertical press at the bottom of the cylinder is placed a horizontal die or perforated plate, called "trafila". The holes in the die for making macaroni vary in size according to the type to be made. Each hole has a small steel rod or pin in the center, which forms the hole in the macaroni. While the dough is divided by the supports of the pin as it enters the die, the tremendous pressure, from 2,500 to 5,000 pounds per square inch, reunites it, and it emerges from the other end of the perforated plate as a perfect tube. The die used in making spaghetti, or solid rod-like macaroni products, has smaller holes without pins.

In general, long-cut macaroni is made in vertical presses, the macaroni being cut by hand into 3-foot lengths and bent over wooden rods for drying.

Horizontal presses are more often used in the manufacture of elbows and other short-cut macaroni products. A revolving knife cuts the macaroni at the outer face of the die, the speed of the knives determining the length of the product.

When making products in the shape of animals, alphabets, seeds, stars, etc., dies having these forms are used. The dough is rolled thin and then the figures are stamped out just as is done in the manufacture of crackers.

The drying of macaroni requires the most expert skill and judgment. It is the most important, the most difficult, and the most delicate operation in the whole process of the manufacture of macaroni products, and upon it largely depends the quality of the finished product.

In Italy macaroni is often dried in the sunshine, in the open air, especially when the product is made in the small plants. Generally, a preliminary drying of about 2 hours' duration is necessary to prevent souring and to keep the short-cut products from sticking together. Very soon after the paste emerges from the die or "trafila", and while still warm, a crust forms upon its surface. This superficial drying or hardening is arrested and eliminated by placing the product in a closed humid cabinet or room. As a result of this treatment the product tends to become flexible again, i. e. it "comes back". This process of "hardening or drying" and of "coming back or becoming again flexible" is carried on alternately. The paste is then removed from the damp room and allowed to dry completely in the open air under Italy's sunny skies. After the macaroni is thoroughly dried in the open air it is transferred to a closed but well ventilated room where it is allowed to "rest" for several hours, after which it is again placed in the open air for 5 or 6 hours and once more allowed to "rest". It is then ready to be packed. Open air drying requires, therefore, considerable supervision. It is generally believed that during the first day a sort of fermentation takes place which produces the much desired flavor.

When the weather does not allow the macaroni to be dried in the open, the alternate "hardening" and "softening" is conducted in specially constructed ventilated cabinets in which the drying is completed. The alternate "drying" and "resting" is for the purpose of preventing warping, as the outer part of the macaroni dries faster than the inner portion.

Although out-of-doors drying is now considered unhygienic and obsolete, as it exposes the product to all kinds of germ-laden dust, it should be remembered that no macaroni is eaten raw. It is generally boiled for at least 10 minutes. Sound macaroni products that have been boiled can no doubt be considered safe in this respect.

In the modern plants, practically all macaroni products are dried in specially constructed drying rooms through which a current of filtered air is blown by means of fans. The air laden with moisture from the macaroni is thus being continually replaced by clean dry air. The temperature of the drying room ranges from 70° to 100° F. The rate of drying depends, not so much on high temperature as it does on air-intake or circulation. In other words, the proper drying of macaroni depends upon correct ventilation, the temperature and hygroscopicity of the air being taken into consideration. During the drying, macaroni should not be exposed to sudden changes in temperature as this may also cause the product to warp.

A preliminary drying of about 2 hours duration is considered necessary to prevent the development of mould. The macaroni is then placed in a damp chamber in order to make uniform the moisture content throughout the product and to develop the flavor characteristic of good macaroni. After this preliminary treatment the macaroni is transferred to the drying chambers.

The long-cut macaroni is hung on sticks or canes and placed in the drying chambers. Sometimes the canes laden with macaroni are hung on a truck, which is then wheeled into the drying chamber.

Short-cut macaroni products are spread out evenly on trays. These are sometimes placed on trucks, which are wheeled into the drying chamber, or a combination of trays may form part of a drying room.

The drying proper takes from 36 to 90 hours, depending upon the efficiency of the process and the nature of the product. It is not advisable to dry macaroni too quickly, as too rapid curing fails to develop the desired flavor and produces a product which, because of the uneven distribution of moisture throughout the mass, may crack or check, or split. A well-cured macaroni should bend somewhat like a whip. It is this elastic property which causes the macaroni to retain its form after being cooked.

WHAT KIND OF WINTER WILL IT BE?

To answer the question, "What kind of weather will we have next winter?" is an entirely different kind of problem from saying what the weather will be like in New York or Boston tomorrow or the next day.

What makes the problem so different?

First, the daily forecast envisions the continuing development over a short period of time of a relatively local (perhaps nationwide) weather pattern. The detailed initial conditions and the important ingredients are well known by current observation. More distant conditions or physical factors do not have time to make their influences felt within a day or two.

The seasonal forecast, on the other hand, is intimately dependent on the complete hemispheric weather pattern. The average state or the nature of the weather activity in one locality can be treated in no sense as an isolated local problem. Furthermore, there is no continuous progressive development of a large-scale pattern over longer periods as there is over a short term locally.

Specific local forecasts of the day-to-day weather changes can be made almost scientifically for a day or two ahead. However, the accuracy of such forecasts falls off rapidly with time, as more distant conditions and weather control factors begin to make their effects felt in relatively unknown ways. No one to date has demonstrated ability to forecast local day-to-day weather changes with any significant degree of skill more than four or five days ahead. It follows that monthly or seasonal forecasts cannot accurately predict short-period weather fluctuations.

No "scientific" seasonal forecasting is possible. There is not even general agreement among meteorologists as to what long-term "weather control factors" make the large-scale weather behavior patterns of one winter or one summer season so different from another, let alone how to incorporate the understanding of any such factors into "scientific" seasonal forecasting.

For a long time to come, seasonal weather forecasts will be based on a statistical or pattern analogy. This is done by comparing the current and recent behavior of the general circulation and weather on a hemispheric scale with similar behavior in corresponding periods in the past. We thus have empirical as opposed to scientific forecasting. However, insofar as the basic long-term weather "control" factors can be surmised or identified, the current state of these factors can be included, at least in a qualitative sense, in the comparative analogy.

The seasonal forecasts themselves are bound to be general and broad, in the form of mean states or trends of the weather activity over large geographical areas without much local detail. Only if the selected analog of the current period shows good similarity in the month-to-month progression of the large-scale weather behavior patterns is there any basis on which to hazard a guess as to the probable month-to-month progression of the large-scale seasonal weather abnormalities or trends.

We are working on the solar-weather analog approach to the preparation of a weather forecast for the two or three seasons ahead. Three basic steps are involved:

1. The selection of the two or three years in the past when the behavior of the hemispheric general circulation and weather patterns were most similar to (best analogs of) the currently terminating season and the two preceding seasons.

2. The comparison of solar (sunspot) activity and any other possible long-term factors of "weather control" during the current year with the behavior of the same factors during the two or three selected analog years, to narrow down the choice to that of a single best analog year for the current year.

3. Use of the large-scale weather patterns during the following two or three seasons of the selected analog year as an indication of the expected development during the current year. Modifications of the development of the analog year may be made on the basis of important differences of the current large-scale weather patterns, or of solar activity, or of other possible control factors from the corresponding conditions in the analog year.

We select the two or three analog years which correspond best to the current year in the large-scale weather patterns of the northern hemisphere. This is the most important step in the analog selection. We have on file the complete northern hemispheric seasonal mean charts of departure from normal of sea-level pressure, of upper-level pressure and of temperature for every season from 1899 to the present. These charts give a complete picture, season by season, of the abnormalities of the northern hemispheric patterns of wind circulation and of temperature for the 63-year period.
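
This selection step lends itself to a small illustration. Below is a sketch in Python of scoring each archived year by the correlation of its seasonal anomaly chart with the current one; the grid shape, the random stand-in data and the choice of three finalists are my assumptions:

    # Sketch of analog-year selection: score each past year by how well its
    # seasonal anomaly chart correlates with the current one, keep the best.
    # The charts are random stand-ins for the hemispheric anomaly fields.
    import numpy as np

    rng = np.random.default_rng(1)
    current_season = rng.normal(size=(20, 40))   # anomaly grid, assumed shape
    archive = {year: rng.normal(size=(20, 40)) for year in range(1899, 1962)}

    def pattern_correlation(a, b):
        return np.corrcoef(a.ravel(), b.ravel())[0, 1]

    scores = {year: pattern_correlation(current_season, chart)
              for year, chart in archive.items()}
    best_analogs = sorted(scores, key=scores.get, reverse=True)[:3]
    print("candidate analog years:", best_analogs)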

As soon as the corresponding departure from normal patterns of the