
Materials for Independent Work

WHAT IS PHYSICS

Physics is considered to be the most basic of the natural sciences. It deals with the fundamental constituents of matter and their interactions, as well as with the nature of atoms and the build-up of molecules and condensed matter. It tries to give a unified description of the behavior of matter and radiation, covering as many types of phenomena as possible. In some of its applications it comes close to the classical areas of chemistry, and in others there are clear connections to the phenomena traditionally studied by astronomers. Present trends even point toward a closer approach between some areas of physics and microbiology.

THE ELECTROMAGNETIC SPECTRUM

Different forms of energy spread across a range called the electromagnetic spectrum. Energy forms in this spectrum have both electrical and magnetic characteristics. They travel as electromagnetic waves. All waves have wavelength and frequency. A wave has an uppermost crest and a bottommost trough.

Wavelength is the distance between the crest of one wave and the next (or between the trough of one wave and the next). Wavelength may be expressed in millimicrons (nanometres). Frequency is the number of waves that pass a given point in a given time. Frequency is expressed in hertz, or cycles per second. An inverse relationship exists in the electromagnetic spectrum: as the wavelengths of energy forms grow longer, their frequencies diminish. Gamma rays have the shortest wavelengths and the highest frequencies; long radio waves have the longest wavelengths and the lowest frequencies. We can directly sense only a small portion of the electromagnetic spectrum. We can see visible light and feel the heat of infrared rays. Other forms require instruments that convert the energy into perceptible forms, such as gamma-ray counters or radio receivers.
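The inverse relationship can be checked numerically: for any electromagnetic wave, wavelength multiplied by frequency equals the speed of light. A minimal sketch (the example wavelengths are illustrative round figures, not values from the text):

```python
# For any electromagnetic wave: wavelength * frequency = c (speed of light).
C = 299_792_458  # speed of light in a vacuum, m/s

def frequency_hz(wavelength_m: float) -> float:
    """Frequency in hertz for a wave of the given wavelength in metres."""
    return C / wavelength_m

# A gamma ray (~1 picometre) versus a long radio wave (~1 kilometre):
gamma_hz = frequency_hz(1e-12)  # shortest wavelength -> highest frequency
radio_hz = frequency_hz(1e3)    # longest wavelength  -> lowest frequency
print(f"gamma ray:  {gamma_hz:.3e} Hz")  # ~3.0e+20 Hz
print(f"radio wave: {radio_hz:.3e} Hz")  # ~3.0e+05 Hz
```

Dividing a fixed constant by the wavelength is exactly the inverse relationship described above: make one factor a million times longer and the other becomes a million times smaller.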

THE ROBOT’S DESIGN

What are industrial robots and how do they work? Although they vary widely in shape, size and capability, industrial robots are made up of several basic components: the manipulator, the control and the power supply.

The manipulator is the mechanical device which actually performs the useful functions of the robot. It is a hydraulically, pneumatically or electrically driven jointed mechanism capable of a number of independent, coordinated motions. Feedback devices on the manipulator's joints or actuators provide information about its motions and positions to the robot control. A gripping device or tool, designed for the specific tasks to be done by the robot, is mounted on the outermost joint of the manipulator. Its function is directed by the robot's control system.

The control stores the desired motions of the robot and their sequence in its memory; directs the manipulator through this sequence or “program” upon command; and interacts with the machines, conveyors and tools with which the robot works. Controls range in complexity from simple stepping switches to minicomputers.

GENERATING X-RAYS

X-rays are forms of radiation higher on the electromagnetic spectrum than the closely related ultraviolet waves. X-rays have great penetrating power because their short wavelength and high frequency let them travel easily between the atoms of a substance. X-rays are emitted from many sources in the universe. They can also be generated for medical and industrial uses. When photographic film is placed behind an object being X-rayed, the developed roentgenogram reveals a shadow picture of the object. For instance, when a hand is X-rayed, the roentgenogram shows the bones of the hand as white shapes against a black background. This is because X-rays pass easily through the flesh but do not penetrate the dense bone, and thus do not expose (darken) the areas of the film covered by the bones. X-rays can be produced by high-vacuum X-ray tubes.

Such tubes consist of an airtight glass container with two electrodes, one positive and one negative, sealed inside. The cathode, or negative electrode, has a small coil of wire. The anode, or positive electrode, consists of a block of metal carrying a tungsten target. An electric current flows through the cathode, causing it to become extremely hot. The heat releases electrons from the cathode. At the same time, a high voltage is applied across the cathode and the anode. This voltage forces the electrons to travel at high speeds towards the tungsten target. When the electrons strike the target, X-rays are produced.
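The connection between the applied voltage and the resulting X-rays can be made quantitative: an electron accelerated through a voltage V carries energy eV, and if all of it goes into a single photon, that photon has the shortest wavelength the tube can emit (the Duane-Hunt law). A sketch with standard physical constants; the 100 kV tube voltage is an illustrative figure, not taken from the text:

```python
# Shortest X-ray wavelength from a tube at a given accelerating voltage:
# the electron's energy e*V becomes one photon of energy h*c/lambda.
H = 6.62607015e-34   # Planck constant, J*s
C = 299_792_458      # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

def min_wavelength_m(tube_voltage_v: float) -> float:
    return H * C / (E * tube_voltage_v)

print(f"{min_wavelength_m(100_000):.3e} m")  # ~1.24e-11 m, about 12 picometres
```

A wavelength of roughly 12 picometres is far shorter than visible light, which is why X-rays slip so easily between the atoms of a substance.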

HISTORY OF THE COMPUTER

For a long time people have been looking for ways of increasing the speed of computations. The history of computers starts about 3000 B.C. with the birth of the abacus, a wooden rack holding two horizontal wires with beads that are moved around according to rules memorized by the user, so that all regular arithmetic problems can be done. It is still in existence and used by some part of the world's population. It made valuable contributions to computing, including positional notation. Another important invention of around the same time was the astrolabe, used for navigation.

The achievements in this field which, step by step, led to the computer as we know it today are connected with such names as Napier (1614), the inventor of logarithms, and Pascal (1642), the creator of the first gear-driven calculating machine, which added numbers entered with dials. Calculating devices in use today closely resemble Pascal's machine.

In 1671 Gottfried Wilhelm von Leibniz improved on Pascal's machine. He invented a special mechanism which is still used in many modern-day calculators.

Ch. X. Thomas created the first commercially successful mechanical calculator that could add, subtract, multiply and divide. Jacquard (1801) developed the punched-card principle, followed by Hollerith's (1880s) "unit record" principle, by which data were coded and represented by holes in cards. Hollerith developed an automatic sorting machine, a card-punch machine and a semiautomatic tabulating machine. He organized the Tabulating Machine Company, which together with some other companies became the International Business Machines Corporation (the famous IBM) in 1924.

By 1890 the range of improvements included accumulation of partial results, storage and automatic reentry of past results (a memory function), printing of the results.

Ch. Babbage (1850), a mathematics professor at Cambridge, constructed large-scale calculating machines when he realized that many long calculations were really a series of predictable actions that were constantly repeated. He called his automatic mechanical calculating machine the Difference Engine. The Difference Engine was really a great advance. Babbage continued to work on it for ten years, but then he started to work on the construction of a fully program-controlled, automatic mechanical digital computer, an idea he called the Analytical Engine. He failed because the necessary parts could not be manufactured precisely enough in his time. Despite the failures, his work made a valuable contribution to the later engineering of calculating machines.

Between 1850 and 1900 great advances were made in mathematical physics, and it came to be known that most observable dynamic phenomena can be described by differential equations (which meant that most events occurring in nature can be measured or described by one equation or another).

HISTORY OF PROGRAMMING LANGUAGES

A programming language is a vocabulary and a set of grammatical rules for instructing a computer to perform specific tasks. Each language has a unique set of keywords (words that it understands) and a special syntax for organizing program instructions.

Machine languages are the languages that the computer actually understands. They are the least complex programming languages and the closest to the computer hardware.

They consist entirely of numbers, and only numbers: memory addresses and operation codes. Each different type of CPU (Central Processing Unit) has its own unique machine language.

Lying between machine languages and high-level languages are languages called assembly languages.

Assembly languages, or assemblers, are similar to machine languages, but they are much easier to program in because they allow a programmer to substitute names for numbers (the ones and zeros) and to use meaningful mnemonics for instructions. In fact, the first assembler was simply a system for representing machine instructions with simple mnemonics.
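The mnemonic-for-number substitution can be shown with a toy assembler. The instruction set and opcodes below are invented for illustration; real assemblers also handle registers, labels and addressing modes:

```python
# A toy assembler for a made-up machine: it only substitutes numeric
# operation codes for mnemonics, much as the first assemblers did.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(source: str) -> list[int]:
    """Translate 'MNEMONIC operand' lines into a flat list of numbers."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        machine_code.append(OPCODES[parts[0]])          # operation code
        machine_code.extend(int(p) for p in parts[1:])  # numeric operands
    return machine_code

program = """
LOAD 10
ADD 11
STORE 12
HALT
"""
print(assemble(program))  # [1, 10, 2, 11, 3, 12, 255]
```

The output list of plain numbers is what the programmer would otherwise have had to write by hand in machine language.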

But most often the term programming language refers to high-level languages, such as BASIC, C, C++, COBOL, FORTRAN, Ada, Pascal, etc.

High-level programming languages are more complex than assemblers and much more complex than machine languages. They all fall into two major categories: imperative languages and declarative languages.

Imperative languages describe computation in terms of a program state and statements that change the program state. Imperative programs are a sequence of commands for the computer to perform.

The earliest imperative languages were the machine languages of the original computers. In these languages, instructions were very simple. FORTRAN (Formula Translation), developed at IBM starting in 1954, was a compiled language that allowed named variables, complex expressions, subprograms, and many other features now common in imperative languages.

Declarative programming languages stand in contrast to imperative languages.

Whereas imperative languages give the computer a list of instructions to execute in a particular order, declarative programming describes to the computer a set of conditions and relationships between variables, and then the language executor (an interpreter or compiler) applies a fixed algorithm to these relations to produce a result.
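The contrast can be sketched in a single language. Python is an imperative language, but a comprehension-style expression approximates the declarative flavour: the first function says how to compute the result step by step by changing program state, while the second only describes what the result is:

```python
# Imperative: explicit state (total) mutated by a sequence of statements.
def sum_of_even_squares_imperative(numbers):
    total = 0
    for n in numbers:
        if n % 2 == 0:
            total += n * n
    return total

# Declarative-flavoured: a single expression describing the desired result;
# the runtime decides how to evaluate it.
def sum_of_even_squares_declarative(numbers):
    return sum(n * n for n in numbers if n % 2 == 0)

data = [1, 2, 3, 4, 5, 6]
print(sum_of_even_squares_imperative(data))   # 56
print(sum_of_even_squares_declarative(data))  # 56
```

Both produce the same answer; the difference is in whether the program spells out the sequence of commands or only the conditions and relationships.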

The advantage of declarative languages is that programs written in them are closer to the program specification. Programming, therefore, is at a higher level than in imperative languages.

NO WORMS IN THESE APPLES

Apple computers may never have been considered the state of the art in artificial intelligence, but a second look should be given. Not only are today's PCs becoming more powerful, but AI's influence is showing up in them. From macros to voice-recognition technology, PCs are becoming our talking buddies. Who else would go surfing with you on short notice, even if it is the net? Who else would care to tell you that you have a business appointment scheduled at 8:35 and 28 seconds, and would notify you about it every minute till you told it to shut up? Even with all the abuse we give today's PCs, they still plug away to make us happy. We use PCs more not because they do more or are faster, but because they are getting so much easier to use. And their ease of use comes from their use of AI.

Speech recognition. You tell the computer to do what you want without it having to learn your voice. This application of AI in personal computers is still very crude, but it does work.

Script recognition. Cursive or printed handwriting can be recognized by notepad-sized devices. With the pen that accompanies your silicon notepad you can write a little note to yourself, which magically changes into computer text if desired. Your computer can read your handwriting. If it can't read it, though, perhaps in the future you will be able to correct it by dictating your letters instead.

Macros. Your computer does faster what you could do more tediously, and you have taught it to do something only by doing it once. In businesses, applications are often upgraded, but the files must be converted: all of the business's records must be changed into the new software's format. Macros save a human the work of converting hundreds of files by teaching the computer to mimic the actions of the programmer, thus teaching it a task that it can repeat whenever ordered to do so.
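The record-once, replay-many idea behind macros can be sketched as follows; the recorded "conversion" steps here are trivial stand-ins for whatever a real application would capture:

```python
# A minimal sketch of the macro idea: record a sequence of actions once,
# then replay it on as many files as needed.
class MacroRecorder:
    def __init__(self):
        self.actions = []            # the remembered sequence of steps

    def record(self, action):
        self.actions.append(action)

    def replay(self, document):
        for action in self.actions:  # repeat the taught task on demand
            document = action(document)
        return document

macro = MacroRecorder()
macro.record(str.strip)              # each step is "taught" by doing it once
macro.record(str.upper)

old_files = ["  report one ", " report two "]
converted = [macro.replay(doc) for doc in old_files]
print(converted)  # ['REPORT ONE', 'REPORT TWO']
```

The human performs the conversion once; the recorder then applies the same sequence to hundreds of files without further supervision.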

AI is all around us. Don't think the change will be hard on us, because AI has been developed to make our lives easier.

COMPUTER-INTEGRATED MANUFACTURING

Since about 1970 there has been a growing trend in manufacturing firms toward the use of computers to perform many of the functions related to design and production. The technology associated with this trend is called CAD/CAM, for computer-aided design and computer-aided manufacturing. Today it is widely recognized that the scope of computer applications must extend beyond design and production to include the business functions of the firm. The name given to this more comprehensive use of computers is computer-integrated manufacturing (CIM).

CAD/CAM is based on the capability of a computer system to process, store, and display large amounts of data representing part and product specifications. For mechanical products, the data represent graphic models of the components; for electrical products, they represent circuit information; and so forth. CAD/CAM technology has been applied in many industries, including machined components, electronic products, and equipment design and fabrication for chemical processing.

CAD/CAM involves not only the automation of the manufacturing operations but also the automation of elements in the entire design-and-manufacturing procedure.

Computer-aided design (CAD) makes use of computer systems to assist in the creation, modification, analysis, and optimization of a design. The designer, working with a CAD system rather than the traditional drafting board, creates the lines and surfaces that form the object (product, part, structure, etc.) and stores this model in the computer database. By invoking the appropriate CAD software, the designer can perform various analyses on the object, such as heat-transfer calculations. The final object design is developed as adjustments are made on the basis of these analyses.

Once the design procedure has been completed, the computer-aided design system can generate the detailed drawings required to make the object.

Computer-aided manufacturing (CAM) involves the use of computer systems to assist in the planning, control, and management of production operations. This is accomplished by either direct or indirect connections between the computer and the production operations. In the case of the direct connection, the computer is used to monitor or control the processes in the factory. Computer process monitoring involves the collection of data from the factory, the analysis of the data, and the communication of process-performance results to plant management. These measures increase the efficiency of plant operations. Computer process control entails the use of the computer system to execute control actions to operate the plant automatically, as described above. Indirect connections between the computer system and the process involve applications in which the computer supports the production operations without actually monitoring or controlling them. These applications include planning and management functions that can be performed by the computer (or by humans working with the computer) more efficiently than by humans alone.
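The monitoring cycle described above (collect data from the factory, analyze it, report performance to plant management) can be sketched as a small loop; all machine names and cycle times below are invented for illustration:

```python
# A minimal sketch of computer process monitoring: collect, analyze, report.
def collect_readings():
    # Stand-in for reading sensors on the factory floor.
    return [{"machine": "press-1", "cycle_time_s": 12.1},
            {"machine": "press-2", "cycle_time_s": 18.7}]

def analyze(readings, target_cycle_s=15.0):
    # Flag machines running slower than the target cycle time.
    return [r["machine"] for r in readings if r["cycle_time_s"] > target_cycle_s]

def report(slow_machines):
    # Communicate process-performance results to plant management.
    if slow_machines:
        return "Attention: slow machines: " + ", ".join(slow_machines)
    return "All machines within target cycle time."

print(report(analyze(collect_readings())))  # press-2 exceeds the target
```

Process control would go one step further and feed decisions back to the machines; monitoring, as here, only observes and reports.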

Examples of these functions are planning the step-by-step processes for making a product, part programming in numerical control, and scheduling the production operations in the factory.

Computer-integrated manufacturing includes all the engineering functions of CAD/CAM and the business functions of the firm as well. These business functions include order entry, cost accounting, employee time records and payroll, and customer billing. In an ideal CIM system, computer technology is applied to all the operational and information-processing functions of the company, from customer orders through design and production (CAD/CAM) to product shipment and customer service. The scope of the computer system includes all activities that are concerned with manufacturing. In many ways, CIM represents the highest level of automation in manufacturing.

INCREASE YOUR KNOWLEDGE OF COMPUTER VIRUSES

What is a virus?

In 1983, researcher Fred Cohen defined a computer virus as "a program that can 'infect' other programs by modifying them to include a version of itself." This means that viruses copy themselves, usually by encryption or by mutating slightly each time they copy. There are several types of viruses, but the ones that are the most dangerous are designed to corrupt your computer or software programs. The effects of a virus can range from an irritating message flashing on your computer screen to the elimination of data on your hard drive. Viruses often use your computer's internal clock as a trigger.
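The "internal clock as a trigger" idea amounts to harmless date logic. A sketch that checks for one well-known trigger date, Friday the 13th:

```python
# Date-trigger logic only: a check for Friday the 13th.
from datetime import date

def is_friday_13th(d: date) -> bool:
    # weekday(): Monday is 0 ... Friday is 4
    return d.day == 13 and d.weekday() == 4

print(is_friday_13th(date(2023, 10, 13)))  # True  (a real Friday the 13th)
print(is_friday_13th(date(2023, 10, 14)))  # False
```

A payload wired to such a check stays dormant until the clock reaches the chosen date, which is why an infected machine can seem healthy for months.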

Some of the most popular dates used are Friday the 13th and famous birthdays. It is important to remember that viruses are dangerous only if you execute (start) an infected program. There are three main kinds of viruses. Each kind is based on the way the virus spreads.

1. Boot Sector Viruses - These viruses attach themselves to floppy disks and then copy themselves into the boot sector of your hard drive. (The boot sector is the set of instructions your computer uses when it starts up.) When you start your computer (or reboot it), your hard drive gets infected. You can get boot sector viruses only from an infected floppy disk. You cannot get one from sharing files or executing programs.

This type of virus is becoming less common because today's computers do not require a boot disk to start, but they can still be found on disks that contain other types of files.

One of the most common boot sector viruses is called "Monkey”, also known as "Stoned".

2. Program Viruses - These viruses (also known as traditional file viruses) attach themselves to programs' executable files. They can infect any file that your computer runs when it launches a program. When you start a program that contains a virus, the virus usually loads into your computer's memory. When the virus is in your computer's memory, it can infect any other program that is started. Program viruses that have circulated recently are "SKA" and "Loveletter."

3. Macro Viruses - These viruses attach themselves to templates that are used to create documents or spreadsheets. Once a template is infected, every document or spreadsheet you open using that program also will become infected. Macro viruses are widespread because they infect commonly used office applications and spread between PCs and Macintoshes. Macro viruses include “Concept”, “Melissa”, and “Have a Nice Day”.

NANOELECTRONICS

Nanoelectronics refers to the use of nanotechnology in electronic components, especially transistors. Although the term nanotechnology is generally defined as utilizing technology less than 100 nm in size, nanoelectronics often refers to transistor devices that are so small that inter-atomic interactions and quantum mechanical properties need to be studied extensively. As a result, present transistors do not fall under this category, even though these devices are manufactured under 45 nm or 32 nm technology.

Nanoelectronics is sometimes considered a disruptive technology because present candidates are significantly different from traditional transistors. Some of these candidates include hybrid molecular/semiconductor electronics, one-dimensional nanotubes/nanowires, and advanced molecular electronics.

Although all of these hold promise for the future, they are still under development and will most likely not be used for manufacturing any time soon.

Fundamental concepts

The volume of an object decreases as the third power of its linear dimensions, but the surface area only decreases as its second power. This somewhat subtle and unavoidable principle has huge ramifications. For example, the power of a drill (or any other machine) is proportional to the volume, while the friction of the drill's bearings and gears is proportional to their surface area. For a normal-sized drill, the power of the device is enough to handily overcome any friction. However, scaling its length down by a factor of 1000, for example, decreases its power by 1000³ (a factor of a billion) while reducing the friction by only 1000² (a factor of "only" a million). Proportionally it has 1000 times less power per unit friction than the original drill. If the original friction-to-power ratio was, say, 1%, that implies the smaller drill will have 10 times as much friction as power. The drill is useless.
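The drill arithmetic can be verified directly: shrinking the linear size by 1000 divides the volume-like quantity (power) by 1000 cubed and the area-like quantity (friction) by only 1000 squared:

```python
# Power scales with volume (L^3), friction with surface area (L^2).
scale = 1000

power_reduction = scale ** 3     # power falls by a factor of a billion
friction_reduction = scale ** 2  # friction falls by only a million

# Power per unit friction is now 1000 times worse:
penalty = power_reduction / friction_reduction
print(penalty)  # 1000.0

# If friction was 1% of the power originally, the scaled-down drill
# now has 0.01 * 1000 = 10 times as much friction as power:
new_ratio = penalty / 100
print(new_ratio)  # 10.0
```

The two printed numbers are exactly the "1000 times less power per unit friction" and "10 times as much friction as power" figures in the paragraph above.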

For this reason, while super-miniature electronic integrated circuits are fully functional, the same technology cannot be used to make working mechanical devices beyond the scales where frictional forces start to exceed the available power. So even though you may see microphotographs of delicately etched silicon gears, such devices are currently little more than curiosities with limited real world applications, for example, in moving mirrors and shutters.

Surface tension increases in much the same way, thus magnifying the tendency for very small objects to stick together. This could possibly make any kind of "micro factory" impractical: even if robotic arms and hands could be scaled down, anything they pick up will tend to be impossible to put down.

The above being said, molecular evolution has resulted in working cilia, flagella, muscle fibers and rotary motors in aqueous environments, all on the nanoscale. These machines exploit the increased frictional forces found at the micro or nanoscale. Unlike a paddle or a propeller which depends on normal frictional forces (the frictional forces perpendicular to the surface) to achieve propulsion, cilia develop motion from the exaggerated drag or laminar forces (frictional forces parallel to the surface) present at micro and nano dimensions. To build meaningful "machines" at the nanoscale, the relevant forces need to be considered. We are faced with the development and design of intrinsically pertinent machines rather than the simple reproductions of macroscopic ones.

All scaling issues therefore need to be assessed thoroughly when evaluating nanotechnology for practical applications.

Approaches to nanoelectronics

Molecular electronics

Single molecule devices are another possibility. These schemes would make heavy use of molecular self-assembly, designing the device components to construct a larger structure or even a complete system on their own. This can be very useful for reconfigurable computing, and may even completely replace present FPGA technology.

Molecular electronics is a new technology which is still in its infancy, but it also brings hope for truly atomic-scale electronic systems in the future. One of the more promising applications of molecular electronics was proposed by the IBM researcher Ari Aviram and the theoretical chemist Mark Ratner in their 1974 and 1988 papers Molecules for Memory, Logic and Amplification (see also the unimolecular rectifier). This is one of many possible ways in which a molecular-level diode or transistor might be synthesized by organic chemistry. A model system was proposed with a spiro carbon structure giving a molecular diode about half a nanometre across which could be connected by polythiophene molecular wires. Theoretical calculations showed the design to be sound in principle, and there is still hope that such a system can be made to work.

Nanoelectronic devices

Computers

[Figure: simulation result for the formation of an inversion channel (electron density) and attainment of threshold voltage (IV) in a nanowire MOSFET; the threshold voltage for this device lies around 0.45 V.]

Nanoelectronics holds the promise of making computer processors more powerful than are possible with conventional semiconductor fabrication techniques. A number of approaches are currently being researched, including new forms of nanolithography, as well as the use of nanomaterials such as nanowires or small molecules in place of traditional CMOS components. Field effect transistors have been made using both semiconducting carbon nanotubes and with heterostructured semiconductor nanowires.

Energy production

Research is ongoing to use nanowires and other nanostructured materials in the hope of creating cheaper and more efficient solar cells than are possible with conventional planar silicon solar cells. It is believed that the invention of more efficient solar cells would have a great effect on satisfying global energy needs.

There is also research into energy production for devices that would operate in vivo, called bio-nano generators. A bio-nano generator is a nanoscale electrochemical device, like a fuel cell or galvanic cell, but drawing power from blood glucose in a living body, much the same way as the body generates energy from food. To achieve the effect, an enzyme is used that is capable of stripping glucose of its electrons, freeing them for use in electrical devices. The average person's body could, theoretically, generate 100 watts of electricity (about 2000 food calories per day) using a bio-nano generator. However, this estimate holds only if all food were converted to electricity, and the human body needs some energy continuously, so the power that could actually be generated is likely much lower. The electricity generated by such a device could power devices embedded in the body (such as pacemakers), or sugar-fed nanorobots. Much of the research done on bio-nano generators is still experimental, with Panasonic's Nanotechnology Research Laboratory among those at the forefront.
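The 100-watt figure can be checked with unit arithmetic: one food calorie (a kilocalorie) is 4184 joules, so 2000 food calories spread over a day works out to roughly 100 watts:

```python
# Sanity check: 2000 food calories per day, fully converted, is ~100 W.
KCAL_TO_JOULES = 4184          # one food calorie is 4184 joules
SECONDS_PER_DAY = 24 * 60 * 60

daily_energy_j = 2000 * KCAL_TO_JOULES          # 8_368_000 J per day
average_power_w = daily_energy_j / SECONDS_PER_DAY
print(f"{average_power_w:.1f} W")  # ~96.9 W, close to the quoted 100 W
```

As the text notes, the real figure would be far lower, since the body cannot divert all of its food energy to a generator.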

New material for nanoscale computer chips 

Alternative to silicon computers

Today the foundation of our computers, mobile phones and other electronic apparatus is the silicon transistor. A transistor is in principle an on/off switch, and there are millions of tiny transistors on every computer chip. However, we are reaching the limit of how small we can make transistors out of silicon.

We already use various organic materials in, for example, flat screens, such as OLED (Organic Light Emitting Diode). The new results show how small and advanced devices made of organic materials can become.

Thomas Bjørnholm, Director of the Nano-Science Center, Department of Chemistry at University of Copenhagen explains:

"We have succeeded in placing several transistors consisting of nanowires together on a nano device. It is a first step towards realization of future electronic circuitry based on organic materials - a possible substitute for today's silicon-based technologies. This offers the possibility of making computers in different ways in the future."

Intel's single-chip cloud computer

Chief technology officer with Intel, Justin Rattner, said the chip comprises 1.3 billion transistors arranged in a network of 24 tiles, each of which has two Pentium-class IA-32 cores, two L2 caches, plus a router to enable communications between cores. The system uses new software applications to control the power consumed by the cores, and to rapidly transfer data between the cores. This means data can go directly between cores without needing to go via the main memory, which cuts the data transfer time by a factor of 15. The software prevents the data being corrupted by instructing the cache sending the data to delete its copy after it is sent, and the receiving cache to delete old copies of the data before receiving the new.
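The copy-and-invalidate rule can be modelled with a toy cache class; this is a simplification invented for illustration, not Intel's actual protocol:

```python
# Toy model of the rule above: the sending cache deletes its copy after
# the send, and the receiving cache drops any stale copy before accepting.
class Cache:
    def __init__(self):
        self.lines = {}                   # address -> data held by this core

    def send(self, other, addr):
        data = self.lines.pop(addr)       # delete own copy after sending
        other.receive(addr, data)

    def receive(self, addr, data):
        self.lines.pop(addr, None)        # drop any old copy first
        self.lines[addr] = data           # then store the fresh data

core_a, core_b = Cache(), Cache()
core_a.lines[0x10] = "fresh result"
core_b.lines[0x10] = "stale result"       # an out-of-date copy

core_a.send(core_b, 0x10)
print(core_b.lines[0x10])    # fresh result
print(0x10 in core_a.lines)  # False: the sender's copy was deleted
```

Because at most one cache holds a given line after a transfer, no core can ever read a stale copy, which is the corruption the software is guarding against.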

The software controlling power consumption allows application developers, rather than the operating system, to decide how power consumed by the cores is controlled. The tiles can all be independently controlled, which means the power consumption on some cores can be reduced to as low as 25 watts, while others can be up to 125 watts. While some developers are not yet sure what they will do with the feature, many are interested in learning more.

Intel’s director of advanced microprocessor research, Nitin Borkar, said tasks could be programmed to run at greater power efficiency rather than higher power if appropriate, or individual cores could be throttled back after they have finished their computations. This would give the system the “compute on demand” feature of traditional data centers.

Intel Labs are forming partnerships with industry and university researchers and producing 100 of the chips to enable research to refine the chip architecture and maximize its usefulness.

Understanding mechanical properties of silicon nanowires paves way for nanodevices

Silicon nanowires are attracting significant attention from the electronics industry due to the drive for ever-smaller electronic devices, from cell phones to computers. The operation of these future devices, and a wide array of additional applications, will depend on the mechanical properties of these nanowires. New research from North Carolina State University shows that silicon nanowires are far more resilient than their larger counterparts, a finding that could pave the way for smaller, sturdier nanoelectronics, nanosensors, light-emitting diodes and other applications.  

It is no surprise that the mechanical properties of silicon nanowires are different from "bulk" (regular-size) silicon materials, because as the diameter of the wires decreases, the surface-to-volume ratio increases. Unfortunately, the experimental results reported in the literature on the properties of silicon nanowires have been conflicting. So the NC State researchers set out to quantify the elastic and fracture properties of the material.

"The mainstream semiconductor industry is built on silicon," says Dr. Yong Zhu, assistant professor of mechanical engineering at NC State and lead researcher on this project. "These wires are the building blocks for future nanoelectronics." For this study, researchers set out to determine how much abuse these silicon nanowires can take. How do they deform - meaning how much can you stretch or warp the material before it breaks? And how much force can they withstand before they fracture or crack? The researchers focused on nanowires made using the vapor-liquid-solid synthesis process, which is a common way of producing silicon nanowires.

Zhu and his team measured the nanowire properties using in-situ tensile testing inside a scanning electron microscope. A nanomanipulator was used as the actuator and a microcantilever as the load sensor. "Our experimental method is direct but simple," says Qingquan Qin, a Ph.D. student at NC State and co-author of the paper. "This method offers real-time observation of nanowire deformation and fracture, while simultaneously providing quantitative stress and strain data. The method is very efficient, so a large number of specimens can be tested within a reasonable period of time."
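The "quantitative stress and strain data" from a tensile test turn into material properties through two standard definitions: stress is force over cross-sectional area, strain is elongation over original length, and their ratio in the elastic region is the elastic (Young's) modulus. All numbers below are hypothetical, not measured values from the study:

```python
# From tensile-test readings to an elastic modulus (illustrative numbers).
import math

def stress_pa(force_n: float, diameter_m: float) -> float:
    """Engineering stress: force over the wire's cross-sectional area."""
    area = math.pi * (diameter_m / 2) ** 2
    return force_n / area

def strain(delta_length_m: float, length_m: float) -> float:
    """Engineering strain: elongation over original length."""
    return delta_length_m / length_m

# A hypothetical 50 nm wire, 2 um long, stretched 20 nm by 1 uN of force:
s = stress_pa(1e-6, 50e-9)
e = strain(20e-9, 2e-6)
elastic_modulus = s / e      # Young's modulus in the elastic region
print(f"{elastic_modulus / 1e9:.0f} GPa")  # ~51 GPa for these made-up inputs
```

Fracture strength is read the same way: it is the stress at the moment the wire breaks, which is why the load sensor and the real-time observation matter.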

As it turns out, silicon nanowires deform in a very different way from bulk silicon. "Bulk silicon is very brittle and has limited deformability, meaning that it cannot be stretched or warped very much without breaking," says Feng Xu, a Ph.D. student at NC State and co-author of the paper. "But silicon nanowires are more resilient and can sustain much larger deformations. Other properties of silicon nanowires include increasing fracture strength and decreasing elastic modulus as the nanowire gets smaller and smaller."

The fact that silicon nanowires have more deformability and strength is a big deal. "These properties are essential to the design and reliability of novel silicon nanodevices," Zhu says. "The insights gained from this study not only advance fundamental understanding about size effects on mechanical properties of nanostructures, but also give designers more options in designing nanodevices ranging from nanosensors to nanoelectronics to nanostructured solar cells."


