The Sixth Sense

SixthSense is a gestural interface device comprising a neck-worn pendant that contains both a data projector and a camera. Head-worn versions, combining cameras and illumination systems for interactive photographic art, were also built at the MIT Media Lab in 1997; these included gesture recognition (e.g., finger tracking using colored tape on the fingers).
SixthSense is also a name for the extra information supplied by a wearable computer, such as the device called “WuW” (Wear yoUr World) by Pranav Mistry et al. It builds on the concept of the Telepointer, a neck-worn projector-and-camera combination first proposed and reduced to practice by MIT Media Lab student Steve Mann.

The SixthSense prototype comprises a pocket projector, a mirror, and a camera, coupled in a pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user’s pocket. The projector projects visual information, enabling surfaces, walls, and physical objects around us to be used as interfaces, while the camera recognizes and tracks the user’s hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) on the tips of the user’s fingers using simple computer-vision techniques. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. The maximum number of tracked fingers is constrained only by the number of unique fiducials, so SixthSense also supports multi-touch and multi-user interaction.
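To make the fiducial-tracking step concrete, here is a minimal sketch of how colored fingertip markers could be segmented and located in each camera frame using HSV thresholding. It is an illustration only, not the actual SixthSense implementation; the HSV ranges and camera index are assumptions that would need calibration for real tape and lighting.

```python
# Minimal sketch of colored-marker fingertip tracking (NOT the actual
# SixthSense code). Requires OpenCV and NumPy.
import cv2
import numpy as np

# Assumed HSV ranges for colored tape on four fingertips.
MARKER_RANGES = {
    "red":    ((0, 120, 80),  (10, 255, 255)),
    "green":  ((45, 80, 80),  (75, 255, 255)),
    "blue":   ((100, 80, 80), (130, 255, 255)),
    "yellow": ((20, 80, 80),  (35, 255, 255)),
}

def track_markers(frame_bgr):
    """Return {color: (x, y)} centroids of the visible fiducials."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    positions = {}
    for color, (lo, hi) in MARKER_RANGES.items():
        mask = cv2.inRange(hsv, np.array(lo), np.array(hi))
        m = cv2.moments(mask)
        if m["m00"] > 500:  # enough matching pixels to count as a detection
            positions[color] = (int(m["m10"] / m["m00"]),
                                int(m["m01"] / m["m00"]))
    return positions

cap = cv2.VideoCapture(0)  # assumed default camera
for _ in range(300):       # process a few seconds of video
    ok, frame = cap.read()
    if not ok:
        break
    # A real system would feed these positions into gesture recognition.
    print(track_markers(frame))
cap.release()
```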

The SixthSense prototype implements several applications that demonstrate the usefulness, viability, and flexibility of the system. The map application lets the user navigate a map displayed on a nearby surface using hand gestures similar to those supported by multi-touch systems, zooming in, zooming out, or panning with intuitive hand movements. The drawing application lets the user draw on any surface by tracking the movements of the index fingertip. SixthSense also recognizes freehand gestures (postures): for example, it implements a gestural camera that photographs the scene the user is looking at upon detecting the ‘framing’ gesture, after which the user can stop by any surface or wall and flick through the photos they have taken. The user can also draw icons or symbols in the air with the index finger, and the system recognizes them as interaction instructions: drawing a magnifying glass opens the map application, while drawing an ‘@’ symbol lets the user check email. SixthSense also augments the physical objects the user is interacting with by projecting further information onto them; a newspaper can show live video news, for example, or dynamic information can be displayed on a plain piece of paper. Drawing a circle on the user’s wrist projects an analog watch. Demo video: http://www.youtube.com/watch?v=YrtANPtnhyg


3D cinemas

A 3D film, or S3D (stereoscopic 3D) film, is a motion picture that enhances the illusion of depth perception. It is derived from stereoscopic photography: a regular motion picture camera system records the images as seen from two perspectives (or computer-generated imagery generates the two perspectives in post-production), and special projection hardware and/or eyewear are used to provide the illusion of depth when viewing the film. 3D films are not limited to theatrical feature releases; television broadcasts and direct-to-video films have also incorporated similar methods, especially since the advent of 3D television and Blu-ray 3D.
3D films have existed in some form since 1915 but had been largely relegated to a niche in the motion picture industry because of the costly hardware and processes required to produce and display them, and because of the lack of a standardized format across the entertainment business. Nonetheless, 3D films were prominently featured in 1950s American cinema and later experienced a worldwide resurgence in the 1980s and 1990s, driven by IMAX high-end theaters and Disney-themed venues. 3D films became more and more successful throughout the 2000s, culminating in the unprecedented success of the 3D presentation of Avatar in December 2009 and January 2010.
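To see how two recorded perspectives can be combined into a single depth-suggesting picture, here is a minimal sketch of red/cyan anaglyph compositing, just one of several 3D presentation methods. The image file names are placeholders, and the two views are assumed to have the same dimensions.

```python
# Red/cyan anaglyph from a stereo pair: the left view supplies the red
# channel, the right view supplies green and blue. View the result with
# red/cyan glasses. File names are placeholders; requires Pillow.
from PIL import Image

left = Image.open("left_eye.jpg").convert("RGB")
right = Image.open("right_eye.jpg").convert("RGB")   # same size as left

r, _, _ = left.split()    # red channel from the left-eye image
_, g, b = right.split()   # green and blue channels from the right-eye image
Image.merge("RGB", (r, g, b)).save("anaglyph.jpg")
```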

A Big Solution To The Entire World

 If we burn the plastic, we generate toxins and a large amount of CO2. If we convert it into oil, we save CO2 and at the same time increase people’s awareness about the value of plastic garbage.—Akinori Ito, CEO of Blest.

For more details, visit Our World 2.0.

Android

Android is an operating system for mobile devices such as smartphones and tablet computers. It is developed by the Open Handset Alliance, led by Google.

Google purchased the initial developer of the software, Android Inc., on August 17, 2005. The unveiling of the Android distribution on November 5, 2007 was announced with the founding of the Open Handset Alliance, a consortium of 84 hardware, software, and telecommunication companies devoted to advancing open standards for mobile devices. Google released most of the Android code under the Apache License, a free software license. The Android Open Source Project (AOSP) is tasked with the maintenance and further development of Android.

Android consists of a kernel based on the Linux kernel, with middleware, libraries and APIs written in C and application software running on an application framework which includes Java-compatible libraries based on Apache Harmony. Android uses the Dalvik virtual machine with just-in-time compilation to run Dalvik dex-code (Dalvik Executable), which is usually translated from Java bytecode.

Android has a large community of developers writing applications (“apps”) that extend the functionality of the devices. Developers write primarily in a customized version of Java. As of October 2011 there were more than 300,000 apps available for Android, and the estimated number of applications downloaded from the Android Market as of December 2011 exceeded 10 billion. Apps can be downloaded from third-party sites or through online stores such as Android Market, the app store run by Google.

Android was listed as the best-selling smartphone platform worldwide in Q4 2010 by Canalys, with over 200 million Android devices in use by November 2011 across several versions of the operating system. As of December 2011, over 700,000 Android devices were being activated every day.

Android 4.0 (Ice Cream Sandwich), released in 2011, is the latest version of the Android platform for phones, tablets, and more. It builds on the things people love most about Android — easy multitasking, rich notifications, customizable home screens, resizable widgets, and deep interactivity — and adds powerful new ways of communicating and sharing.

Input and Output devices

In computing, input/output, or I/O, refers to the communication between an information processing system (such as a computer) and the outside world, possibly a human or another information processing system. Inputs are the signals or data received by the system, and outputs are the signals or data sent from it. The term can also be used as part of an action: to “perform I/O” is to perform an input or output operation. I/O devices are used by a person (or other system) to communicate with a computer. For instance, a keyboard or a mouse may be an input device for a computer, while monitors and printers are considered output devices. Devices for communication between computers, such as modems and network cards, typically serve for both input and output.

Note that the designation of a device as either input or output depends on perspective. Mice and keyboards take as input the physical movement that the human user outputs and convert it into signals that a computer can understand; the output from these devices is input for the computer. Similarly, printers and monitors take as input the signals that a computer outputs and convert them into representations that human users can see or read. For a human user, the process of reading or seeing these representations is receiving input. This interaction between computers and humans is called human–computer interaction.

In computer architecture, the combination of the CPU and main memory (i.e. memory that the CPU can read and write to directly, with individual instructions) is considered the brain of a computer, and from that point of view any transfer of information from or to that combination, for example to or from a disk drive, is considered I/O. The CPU and its supporting circuitry provide memory-mapped I/O that is used in low-level computer programming in the implementation of device drivers. An I/O algorithm is one designed to exploit locality and perform efficiently when data reside on secondary storage, such as a disk drive.
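As a small programming-level illustration of the memory-mapped idea, the sketch below uses Python's standard mmap module to treat a file's bytes as ordinary memory. Real device drivers map hardware registers rather than files, so this is only an analogy; the file name is a placeholder, and the file is assumed to exist and be non-empty.

```python
# Memory-mapped I/O analogy: once mapped, reads and writes are plain memory
# operations rather than explicit read()/write() calls. Device drivers map
# hardware registers the same way; here we map an ordinary file instead.
import mmap

with open("data.bin", "r+b") as f:            # placeholder file, read/write
    with mmap.mmap(f.fileno(), 0) as mem:     # map the whole (non-empty) file
        first = mem[0]                        # "input": a memory read
        mem[0] = (first + 1) % 256            # "output": a memory write
        mem.flush()                           # push the change back to disk
```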

Steve Jobs

Steven Paul “Steve” Jobs (February 24, 1955 – October 5, 2011) was an American inventor and entrepreneur. He was co-founder, chairman, and chief executive officer of Apple Inc. Jobs was co-founder and previously served as chief executive of Pixar Animation Studios; he became a member of the board of directors of the Walt Disney Company in 2006, following the acquisition of Pixar by Disney.

In the late 1970s, Jobs — along with Apple co-founder Steve Wozniak, Mike Markkula and others — designed, developed, and marketed one of the first commercially successful lines of personal computers, the Apple II series. In the early 1980s, Jobs was among the first to see the commercial potential of Xerox PARC’s mouse-driven graphical user interface, which led to the creation of the Apple Lisa and, one year later, the Macintosh. After losing a power struggle with the board of directors in 1985, Jobs left Apple and founded NeXT, a computer platform development company specializing in the higher-education and business markets.

In 1986, he acquired the computer graphics division of Lucasfilm Ltd., which was spun off as Pixar Animation Studios. He was credited in Toy Story (1995) as an executive producer. He remained CEO and majority shareholder, at 50.1 percent, until Pixar's acquisition by The Walt Disney Company in 2006, making Jobs Disney's largest individual shareholder, at seven percent, and a member of Disney's board of directors. Apple's 1996 buyout of NeXT brought Jobs back to the company he co-founded; he served as its interim CEO from 1997 and became permanent CEO in 2000, spearheading the advent of the iPod, iPhone, and iPad. From 2004, he fought a long battle with cancer, which eventually led to his resignation as CEO in August 2011, during his third medical leave. After his resignation, Jobs was elected chairman of Apple's board of directors.

On October 5, 2011, around 3:00 pm, Jobs died at his home in Palo Alto, California, aged 56, six weeks after resigning as CEO of Apple. A copy of his death certificate, made public on October 10, indicated respiratory arrest as the immediate cause of death, with “metastatic pancreas neuroendocrine tumor” as the underlying cause. His occupation was listed as “entrepreneur” in the “high tech” business. He was widely described as a visionary, pioneer, and genius. By one account, Jobs did more than 70 percent of his innovative work in the last seven to eight years of his life, i.e., after mid-2004.

Nanotechnology


Nanotechnology is the creation and use of materials or devices at extremely small scales. These materials or devices fall in the range of 1 to 100 nanometers (nm). One nm is equal to one-billionth of a meter (0.000000001 m), which is about 50,000 times smaller than the diameter of a human hair. Scientists refer to the dimensional range of 1 to 100 nm as the nanoscale, and materials at this scale are called nanocrystals or nanomaterials.
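A quick back-of-the-envelope check of those figures (the hair diameter of roughly 0.05 mm is an assumed typical value; real hairs vary considerably):

```python
# Sanity-checking the nanoscale figures above. The hair diameter (~0.05 mm)
# is an assumed typical value.
nanometer = 1e-9            # one nm in meters
hair_diameter = 0.05e-3     # ~0.05 mm in meters

print(f"1 nm = {nanometer} m")
print(f"hair diameter / 1 nm = {hair_diameter / nanometer:,.0f}")  # ~50,000
```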

The nanoscale is unique because nothing solid can be made any smaller. It is also unique because many of the mechanisms of the biological and physical world operate on length scales from 0.1 to 100 nm. At these dimensions materials exhibit different physical properties; thus scientists expect that many novel effects at the nanoscale will be discovered and used for breakthrough technologies.

A number of important breakthroughs have already occurred in nanotechnology. These developments are found in products used throughout the world. Some examples are catalytic converters in automobiles that help remove air pollutants, devices in computers that read from and write to the hard disk, certain sunscreens and cosmetics that transparently block harmful radiation from the Sun, and special coatings for sports clothes and gear that help improve the gear and possibly enhance the athlete’s performance. Still, many scientists, engineers, and technologists believe they have only scratched the surface of nanotechnology’s potential.

Nanotechnology is in its infancy, and no one can predict with accuracy what will result from the full flowering of the field over the next several decades. Many scientists believe it can be said with confidence, however, that nanotechnology will have a major impact on medicine and health care; energy production and conservation; environmental cleanup and protection; electronics, computers, and sensors; and world security and defense.


To grasp the size of the nanoscale, consider the diameter of an atom, the basic building block of matter. The hydrogen atom, one of the smallest naturally occurring atoms, is only 0.1 nm in diameter. In fact, nearly all atoms are roughly 0.1 nm in size, too small to be seen by human eyes. Atoms bond together to form molecules, the smallest part of a chemical compound. Molecules that consist of about 30 atoms are only about 1 nm in diameter. Molecules, in turn, compose cells, the basic units of life. Human cells range from 5,000 to 200,000 nm in size, which means that they are larger than the nanoscale. However, the proteins that carry out the internal operations of the cell are just 3 to 20 nm in size and so have nanoscale dimensions. Viruses that attack human cells are about 10 to 200 nm, and the molecules in drugs used to fight viruses are less than 5 nm in size.

The possibility of building new materials and devices that operate at the same scale as the basic functions of nature explains why so much attention is being devoted to the world below 100 nm. But 100 nm is not some arbitrary dividing line. This is the length at which special properties have been observed in materials—properties that are profoundly different at the nanoscale.

Human beings have actually known about these special properties for some time, although they did not understand why they occurred. Glassworkers in the Middle Ages, for example, knew that by breaking down gold into extremely small particles and sprinkling these fine particles into glass the gold would change in color from yellow to blue or green or red, depending on the size of the particle. They used these particles to help create the beautiful stained glass windows found in cathedrals throughout Europe, such as the cathedral of Notre Dame in Paris, France. These glassworkers did not realize it at the time, but they had created gold nanocrystals. At scales above 100 nm gold appears yellow, but at scales below 100 nm it exhibits other colors.

Nanotechnologists are intrigued by the possibility of creating human-made devices at the molecular, or nanoscale, level. That is why the field is sometimes called molecular nanotechnology. Some nanotechnologists are also aiming for these devices to self-replicate—that is, to simultaneously carry out their function and increase their number, just as living organisms do. To some early proponents of the field, this aspect of nanotechnology is the most important. If tiny functional units could be assembled at the molecular level and made to self-replicate under controlled conditions, tremendous efficiencies could be realized. However, many scientists doubt the possibility of self-replicating nanostructures.

  • APPROACHES TO NANOTECHNOLOGY


Scientists are currently experimenting with two approaches to making structures or devices at the scale of 1 to 100 nm. These methods are called the top-down approach and the bottom-up approach.


  • Top-down approach

In the top-down process, technologists start with a bulk material and carve a smaller structure out of it. This is the process commonly used today to create computer chips, the tiny memory and logic units, also known as integrated circuits, that operate computers. To produce a computer chip, thin films of material are deposited on a silicon wafer, patterned through a mask, and the unneeded portions are etched away. Almost all of today's commercial computer chips have features larger than 100 nm. However, the technology to create ever smaller and faster computer chips has already gone below 100 nm. Smaller and faster chips will enable computers to become even smaller and to perform many more functions more quickly.

The top-down approach, sometimes called microfabrication or nanofabrication, uses advanced lithographic techniques to create structures the size of, or smaller than, current commercial computer chips. These techniques include optical lithography and electron-beam (e-beam) lithography. Optical lithography can currently produce structures as small as 100 nm, and efforts are being made to create even smaller features with it. E-beam lithography can create structures as small as 20 nm, but it is not suitable for large-scale production because it is too expensive. Already, the cost of building fabrication facilities for producing computer chips with optical lithography approaches several billion dollars.

Ultimately, the top-down approach to producing nanostructures is likely not only to be too costly but also to be technically impossible. Assembling computer chips or other materials at the nanoscale is unworkable for a fundamental reason: to reduce a material in a specifically designed way, the tool used to do the work must have a dimension or precision finer than the piece to be reduced. Thus, a machine tool must have a cutting edge finer than the finest detail to be cut. Likewise, the lithographic mask used to etch away locations on a silicon wafer must be constructed with a precision finer than the material to be removed. At the nanoscale, where the material to be removed could be a single molecule or atom, it is impossible to meet this condition.

  • Bottom-up approach

As a result, scientists have become interested in a vastly different approach to creating structures at the nanoscale, known as the bottom-up approach. The bottom-up approach involves manipulating atoms and molecules to form nanostructures, avoiding the problem of having to create an ever finer means of whittling material down to nanoscale size. Instead, nanostructures would be assembled atom by atom and molecule by molecule, from the atomic level up, just as occurs in nature. However, assembly at this scale has its own challenges.

In school, children learn about some of these challenges when they study the random Brownian motion of particles suspended in liquids such as water. The particles are not moving under their own power; rather, the water molecules that surround them are constantly in motion, and this motion causes the molecules to strike the particles at random. Atoms also exhibit such random motion due to their kinetic energy. Temperature and the strength of the bonds holding the atoms in place determine the degree to which atoms move. Even in solids at room temperature—the chair you may be sitting on, for example—atoms move about in a process called diffusion. This ability of atoms to move about increases as a substance changes from solid to liquid to gas. If scientists and engineers are to successfully assemble at the atomic scale, they must have the means to overcome this type of behavior.
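A toy two-dimensional random walk captures the gist of this thermal jostling; the step size and step count below are arbitrary illustration values:

```python
# A 2D random walk as a toy model of thermal jostling (diffusion).
# Step size and step count are arbitrary illustration values.
import math
import random

x = y = 0.0
steps, step_size = 10_000, 0.1
for _ in range(steps):
    theta = random.uniform(0.0, 2.0 * math.pi)  # each kick has a random direction
    x += step_size * math.cos(theta)
    y += step_size * math.sin(theta)

# The walker never sits still, yet its net displacement grows only like the
# square root of the number of steps.
print(f"net displacement: {math.hypot(x, y):.2f}")
print(f"expected scale:   {step_size * math.sqrt(steps):.2f}")
```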

A clear example of such a challenge occurred in 1990 when scientists from the International Business Machines Corporation (IBM) used a scanning probe microscope tip to assemble individual xenon atoms so that they formed the letters IBM on a nickel surface. To prevent the atoms from moving away from their assigned locations, the nickel surface was cooled to temperatures close to absolute zero, the lowest temperature theoretically possible and characterized by the complete absence of heat. (Absolute zero is -273.15°C [-459.67°F].) At this low temperature, the atoms possessed very little kinetic energy and were essentially frozen.

Achieving this temperature, however, is impractical and uneconomical for the operation of commercial devices. Nevertheless, the ability of scientists to manipulate atoms was one of the first indications that the bottom-up approach might work. It also signaled the emergence of nanotechnology as an experimental science.

  • Future impact of nanotechnology

Nanotechnology is expected to have a variety of economic, social, environmental, and national security impacts. In 2000 the National Science Foundation began working with the National Nanotechnology Initiative (NNI) to address nanotechnology’s possible impacts and to propose ways of minimizing any undesirable consequences.

For example, nanotechnology breakthroughs may result in the loss of some jobs. Just as the development of the automobile destroyed the markets for many products associated with horse-based transportation and led to the loss of many jobs, transformative products based on nanotechnology will inevitably lead to a similar result in some contemporary industries. One example of an at-risk occupation is manufacturing conventional televisions: nanotechnology-based flat-panel TVs, using field-emission or liquid-crystal display (LCD) technology, will likely make those jobs obsolete. These new types of televisions also promise to radically improve picture quality. In field-emission TVs, for example, each pixel (picture element) is composed of a sharp tip that emits electrons at very high currents across a small potential gap into a red, green, or blue phosphor. The pixels are brighter, and unlike LCDs, which lose clarity in sunlight, field-emission TVs retain clarity in bright sunlight. Field-emission TVs also use much less energy than conventional TVs, and they can be made very thin—less than a millimeter—although actual commercial devices will probably have a bit more heft for structural stability and ruggedness. Samsung claims it will release the first commercial model, based on carbon nanotube emitters, by early 2004.

Other potential job losses could be those of supermarket cashiers if nanotechnology-based, flexible, thin-film computers housed in plastic product wrappings enable all-at-once checkout. Supermarket customers could simply wheel their carts through a detection gateway, similar in shape to the magnetic security systems found at the exits of stores today. As with any transformative technology, however, nanotechnology can also be expected to create many new jobs.

The societal impacts from nanotechnology-based advances in human health care may also be large. A ten-year increase in human life expectancy in the United States due to nanotechnology advances would have a significant impact on Social Security and retirement plans. As in the fields of biotechnology and genomics, certain development paths in nanotechnology are likely to have ethical implications.

Nanomaterials could also have adverse environmental impacts. Proper regulation should be in place to minimize any harmful effects. Because nanomaterials are invisible to the human eye, extra caution must be taken to avoid releasing these particles into the environment. Some preliminary studies point to possible carcinogenic (cancer-causing) properties of carbon nanotubes. Although these studies need to be confirmed, many scientists consider it prudent now to take measures to prevent any potential hazard that these nanostructures may pose. However, the vast majority of nanotechnology-based products will contain nanomaterials bound together with other materials or components, rather than free-floating nano-sized objects, and will therefore not pose such a risk.

At the same time, nanotechnology breakthroughs are expected to have many environmental benefits, such as reducing the emission of air pollutants and cleaning up oil spills. The large surface areas of nanomaterials give them a significant capacity to absorb various chemicals. Already, researchers at Pacific Northwest National Laboratory in Richland, Washington, part of the U.S. Department of Energy, have used a porous silica matrix with a specially functionalized surface to remove lead and mercury from water supplies.

Finally, nanotechnology can be expected to have national security uses that could both improve military forces and allow for better monitoring of peace and inspection agreements. Efforts to prevent the proliferation of nuclear weapons or to detect the existence of biological and chemical weapons, for example, could be improved with nanotech devices.

NANOTECHNOLOGY RESEARCH

Major centers of nanoscience and nanotechnology research are found at universities and national laboratories throughout the world. Many specialize in particular aspects of the field. Centers in nanoelectronics and photonics (the study of the properties of light) are found at the Albany Institute of Nanotechnology in Albany, New York; Cornell University in Ithaca, New York; the University of California at Los Angeles (UCLA); and Columbia University in New York City. In addition, Cornell hosts the Nanobiotechnology Center.

Universities with departments specializing in nanopatterning and assembly include Northwestern University in Evanston, Illinois, and the Massachusetts Institute of Technology (MIT) in Cambridge. Biological and environmental-based studies of nanoscience exist at the University of Pennsylvania in Philadelphia, Rice University in Houston, and the University of Michigan in Ann Arbor. Studies in nanomaterials are taking place at the University of California at Berkeley and the University of Illinois in Urbana-Champaign. Other university-affiliated departments engaged in nanotechnology research include the Nanotechnology Center at Purdue University in West Lafayette, Indiana; the University of South Carolina NanoCenter in Columbia; the Nanomanufacturing Research Institute at Northeastern University in Boston, Massachusetts; and the Center for Nano Science and Technology at the University of Notre Dame in South Bend, Indiana. By 2003 more than 100 U.S. universities had departments or research institutes specializing in nanotechnology.

Other major research efforts are taking place at national laboratories, such as the Center for Integrated Nanotechnologies at Sandia National Laboratories in Albuquerque and at Los Alamos National Laboratory, both in New Mexico; the Center for Nanophase Materials Sciences at Oak Ridge National Laboratory in Tennessee; the Center for Functional Nanomaterials at Brookhaven National Laboratory in Upton, New York; the Center for Nanoscale Materials at Argonne National Laboratory outside Chicago, Illinois; and the Molecular Foundry at the Lawrence Berkeley National Laboratory in Berkeley, California.

Internationally, the Max-Planck Institutes in Germany, the Centre National de la Recherche Scientifique (CNRS) in France, and the National Institute of Advanced Industrial Science and Technology of Japan are all engaged in nanotechnology research.


Possibilities of various non-conventional energy sources

As the demand for energy increases, interest in non-conventional sources such as the sun, wind, tides, and biogas is growing. These sources are easily available, renewable, environmentally benign, less hazardous, and largely pollution-free. Our need for electricity increases day by day, and depending solely upon conventional energy resources is not a wise policy. In our country there are many possibilities for developing non-conventional energy sources.

Non-conventional energy resources are renewable, easily available, eco-friendly, and pollution-free. Solar energy, wind energy, tidal energy, biogas, and geothermal energy are the principal non-conventional energy resources.

Importance and advantages of non-conventional energy over conventional energy sources


  • Solar energy

There is much scope for tapping solar energy, and it is already used in different parts of the country. Sunlight can be converted directly to electricity through photovoltaic technology; generation is possible with this method even from a 1 sq. km area. Solar energy is most commonly used for cooking and for generating electricity. Its advantages are that there is no fuel expense, no environmental pollution, abundant availability, and very little risk of accidents.
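As a rough illustration of photovoltaic output, the sketch below multiplies panel area, peak irradiance, and conversion efficiency; all three inputs are assumed round numbers, not measurements:

```python
# Back-of-the-envelope photovoltaic output estimate. All inputs are assumed
# round numbers for illustration only.
area_m2 = 100.0               # total panel area
irradiance_w_per_m2 = 1000.0  # peak sunlight, roughly 1 kW per square meter
efficiency = 0.15             # a typical commodity-panel efficiency (~15%)

peak_power_w = area_m2 * irradiance_w_per_m2 * efficiency
print(f"peak output: {peak_power_w / 1000:.1f} kW")  # 15.0 kW for these inputs
```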

  • Wind energy

The energy of the wind is used to generate electricity. Blades fixed to a shaft rotate in the moving air; the kinetic energy of the wind turns the shaft, which drives a generator to produce electricity. About 2,000 megawatts of electricity can be produced from windmills in our country.
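The power the blades can extract follows the standard wind-power relation P = 1/2 * rho * A * v^3 times a power coefficient; the rotor size, wind speed, and coefficient below are assumed illustrative values:

```python
# Wind turbine power from P = 0.5 * rho * A * v**3 * Cp. Rotor diameter,
# wind speed, and power coefficient are assumed illustrative values.
import math

rho = 1.225             # air density at sea level, kg/m^3
rotor_diameter = 40.0   # meters (assumed)
wind_speed = 8.0        # meters per second (assumed)
cp = 0.40               # power coefficient; the Betz limit caps it near 0.593

swept_area = math.pi * (rotor_diameter / 2.0) ** 2
power_w = 0.5 * rho * swept_area * wind_speed ** 3 * cp
print(f"electrical output: {power_w / 1000:.0f} kW")  # ~158 kW here
```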

  • Bio-gas

Biogas contains about 55% methane and 45% CO2. Bushes, crop wastes, and human and animal wastes are used to produce the gas; decaying plant material yields methane. Biogas burns at a higher temperature than kerosene or charcoal. It is also known as gobar gas, since cow dung is largely used to produce it.

  • Tidal energy

Tidal energy can be considered one of the least expensive sources of energy: once a plant is fully developed, the fuel cost is nil. Tidal power can be classified into three generating methods:

Tidal stream generator

Tidal stream generators (or TSGs) make use of the kinetic energy of moving water to power turbines, in a similar way to wind turbines that use moving air.

Tidal barrage

Tidal barrages make use of the potential energy in the difference in height (or head) between high and low tides. Barrages are essentially dams across the full width of a tidal estuary.
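The recoverable potential energy per tide behind a barrage is often approximated as E = 1/2 * rho * g * A * h^2, where A is the basin area and h the head; the basin size and tidal range below are assumed illustrative values:

```python
# Potential energy stored behind a tidal barrage per tidal cycle, using the
# common approximation E = 0.5 * rho * g * A * h**2. Basin area and head
# are assumed illustrative values.
rho = 1025.0         # seawater density, kg/m^3
g = 9.81             # gravitational acceleration, m/s^2
basin_area_m2 = 5e6  # a 5 km^2 enclosed basin (assumed)
head_m = 6.0         # height difference between high and low tide (assumed)

energy_j = 0.5 * rho * g * basin_area_m2 * head_m ** 2
print(f"energy per tide: {energy_j / 3.6e9:.0f} MWh")  # joules -> MWh, ~251 MWh
```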

Dynamic tidal power

Dynamic tidal power (or DTP) is a theoretical generation technology that would exploit an interaction between the potential and kinetic energies in tidal flows. It proposes that very long dams (for example, 30–50 km in length) be built from coasts straight out into the sea or ocean, without enclosing an area. Tidal phase differences are introduced across the dam, leading to a significant water-level differential in shallow coastal seas that feature strong coast-parallel oscillating tidal currents, such as those found in the UK, China, and Korea.

  • Wave energy

Projects for producing electricity from waves have been established in many parts of the world. Waves are made to move into a trapped air column. As the waves rise in the sea outside the column, the water inside the chamber also rises, forcing the pocket of trapped air out through a turbine. As the waves fall, air is sucked back in from the atmosphere, causing the turbine to spin again. Thus electricity is produced.

  • Geothermal energy

Geothermal energy is thermal energy generated and stored in the Earth. Thermal energy is energy that determines the temperature of matter. Earth’s geothermal energy originates from the original formation of the planet, from radioactive decay of minerals and from volcanic activity. The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface.

Conclusion

Nowadays the energy crisis is becoming a great problem, and non-conventional energy sources play a major role in conserving energy. By making use of non-conventional energy sources across the world, we can generate hundreds of megawatts of electricity.


Remote sensing

Remote sensing can be defined as the collection of data about an object from a distance. Humans and many other types of animals accomplish this task with the aid of eyes or by the senses of smell or hearing. Earth scientists use the technique of remote sensing to monitor or measure phenomena found in the Earth's lithosphere, biosphere, hydrosphere, and atmosphere. Remote sensing of the environment by geographers is usually done with the help of mechanical devices known as remote sensors. These devices have greatly improved the ability to receive and record information about an object without any physical contact. Often, these sensors are positioned away from the object of interest on helicopters, planes, and satellites. Most sensing devices record information about an object by measuring the object's transmission of electromagnetic energy from reflecting and radiating surfaces. These sensors are either passive or active. Passive sensors detect naturally occurring energy when it is available, such as energy from the sun. Active sensors, such as radar, provide their own source of energy and record its reflection off the target.

Remote sensing imagery has many applications in mapping land-use and cover, agriculture, soils mapping, forestry, city planning, archaeological investigations, military observation, and geomorphological surveying, among other uses. For example, foresters use aerial photographs for preparing forest cover maps, locating possible access roads, and measuring quantities of trees harvested. Specialized photography using color infrared film has also been used to detect disease and insect damage in forest trees.

Applications of remote sensing data

  • Conventional radar is mostly associated with aerial traffic control, early warning, and certain large-scale meteorological data collection. Doppler radar is used by local law enforcement to monitor speed limits and in enhanced meteorological collection, such as measuring wind speed and direction within weather systems. Other types of active collection include sensing of plasmas in the ionosphere. Interferometric synthetic aperture radar is used to produce precise digital elevation models of large-scale terrain. Laser and radar altimeters on satellites have provided a wide range of data: by measuring the bulges of water caused by gravity, they map features on the seafloor to a resolution of a mile or so, and by measuring the height and wavelength of ocean waves, they measure wind speeds and directions and surface ocean currents and directions.
  • Light detection and ranging (LIDAR) is well known from weapon ranging and laser-illuminated homing of projectiles. LIDAR is used to detect and measure the concentration of various chemicals in the atmosphere, while airborne LIDAR can be used to measure the heights of objects and features on the ground more accurately than radar technology can. Vegetation remote sensing is a principal application of LIDAR.
  • Radiometers and photometers are the most common instruments in use, collecting reflected and emitted radiation in a wide range of frequencies. The most common are visible and infrared sensors, followed by microwave, gamma ray, and, rarely, ultraviolet. They may also be used to detect the emission spectra of various chemicals, providing data on chemical concentrations in the atmosphere.
  • Stereographic pairs of aerial photographs have often been used to make topographic maps by imagery and terrain analysts in trafficability studies and by highway departments scouting potential routes.
  • Simultaneous multi-spectral platforms such as Landsat have been in use since the 1970s. These thematic mappers take images in multiple wavelengths of electromagnetic radiation (multi-spectral) and are usually found on Earth observation satellites, including (for example) the Landsat program and the IKONOS satellite. Maps of land cover and land use from thematic mapping can be used to prospect for minerals, detect or monitor land usage and deforestation, and examine the health of indigenous plants and crops, including entire farming regions or forests.
  • Hyperspectral imaging produces an image in which each pixel has full spectral information, imaging narrow spectral bands over a contiguous spectral range. Hyperspectral imagers are used in various applications including mineralogy, biology, defense, and environmental measurements.
  • Within the scope of the fight against desertification, remote sensing makes it possible to follow up on and monitor risk areas in the long term, to determine desertification factors, to support decision-makers in defining relevant measures of environmental management, and to assess their impacts.

Use of remote sensing in farming

When farmers or ranchers observe their fields or pastures to assess their condition without physically touching them, it is a form of remote sensing. Observing the colors of leaves or the overall appearance of plants can reveal the plants' condition. Remotely sensed images taken from satellites and aircraft provide a means to assess field conditions from a vantage point high above the field, without physical contact.

Most remote sensors see the same visible wavelengths of light that are seen by the human eye, although in most cases remote sensors can also detect energy from wavelengths that are undetectable to the human eye. The remote view of the sensor and the ability to store, analyze, and display the sensed data on field maps are what make remote sensing a potentially important tool for agricultural producers. Agricultural remote sensing is not new and dates back to the 1950s, but recent technological advances have made the benefits of remote sensing accessible to most agricultural producers.


Making use of remote sensing on the farm

Remotely sensed images can be used to identify nutrient deficiencies, diseases, water deficiency or surplus, weed infestations, insect damage, hail damage, wind damage, herbicide damage, and plant populations.

Information from remote sensing can be used as base maps in variable rate applications of fertilizers and pesticides. Information from remotely sensed images allows farmers to treat only affected areas of a field. Problems within a field may be identified remotely before they can be visually identified.

Ranchers use remote sensing to identify prime grazing areas, overgrazed areas or areas of weed infestations. Lending institutions use remote sensing data to evaluate the relative values of land by comparing archived images with those of surrounding fields.


The Electromagnetic Spectrum

The basic principles of remote sensing with satellites and aircraft are similar to those of visual observation. Energy in the form of light waves travels from the sun to Earth, much as waves travel across a lake. The distance from the peak of one wave to the peak of the next is the wavelength. The range of energy coming from sunlight is called the electromagnetic spectrum.

The wavelengths used in most agricultural remote sensing applications cover only a small region of the electromagnetic spectrum. Wavelengths are measured in micrometers (µm) or nanometers (nm); one µm is about 0.00003937 inch, and 1 µm equals 1,000 nm. The visible region of the electromagnetic spectrum runs from about 400 nm to about 700 nm. The green color associated with plant vigor has a wavelength that centers near 500 nm.

Wavelengths longer than those in the visible region and up to about 25 µm are in the infrared region. The infrared region nearest to that of the visible region is the near infrared (NIR) region. Both the visible and infrared regions are used in agricultural remote sensing.


Electromagnetic Energy and Plants

When electromagnetic energy from the sun strikes plants, three things can happen. Depending upon the wavelength of the energy and characteristics of individual plants, the energy will be reflected, absorbed, or transmitted. Reflected energy bounces off leaves and is readily identified by human eyes as the green color of plants. A plant looks green because the chlorophyll in the leaves absorbs much of the energy in the visible wavelengths and the green color is reflected. Sunlight that is not reflected or absorbed is transmitted through the leaves to the ground.

Interactions between reflected, absorbed, and transmitted energy can be detected by remote sensing. Differences in leaf colors, textures, and shapes, or even in how the leaves are attached to plants, determine how much energy is reflected, absorbed, or transmitted. The relationship between reflected, absorbed, and transmitted energy is used to determine the spectral signatures of individual plants. Spectral signatures are unique to plant species.

Remote sensing is used to identify stressed areas in fields by first establishing the spectral signatures of healthy plants. The spectral signatures of stressed plants appear altered from those of healthy plants. Figure 3 compares the spectral signatures of healthy and stressed sugar beets.

Stressed sugar beets have a higher reflectance value in the visible region of the spectrum, from 400-700 nm. This pattern is reversed for stressed sugar beets in the non-visible range from about 750-1200 nm, and the visible pattern repeats in the higher reflectance range from about 1300-2400 nm. Interpreting the reflectance values at various wavelengths of energy can thus be used to assess crop health.

The comparison of the reflectance values at different wavelengths, called a vegetative index, is commonly used to determine plant vigor. The most common vegetative index is the normalized difference vegetative index (NDVI). NDVI compares the reflectance values of the red and NIR regions of the electromagnetic spectrum. The NDVI value of each area on an image helps identify areas of varying levels of plant vigor within fields.
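A per-pixel NDVI computation is a one-liner once the red and NIR bands are available; the tiny arrays below are placeholders standing in for real sensor imagery:

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel. These 2x2 bands are
# placeholder values; real bands would come from multi-spectral imagery.
import numpy as np

red = np.array([[0.08, 0.10],
                [0.30, 0.25]])   # red-band reflectance (0..1)
nir = np.array([[0.50, 0.55],
                [0.35, 0.30]])   # near-infrared reflectance (0..1)

ndvi = (nir - red) / (nir + red + 1e-9)  # epsilon guards against divide-by-zero
print(ndvi)  # values near +1 suggest vigorous vegetation; near 0, stress or bare soil
```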


How Does Remote Sensing Work?

There are several types of remote sensing systems used in agriculture but the most common is a passive system that senses the electromagnetic energy reflected from plants. The sun is the most common source of energy for passive systems. Passive system sensors can be mounted on satellites, manned or unmanned aircraft, or directly on farm equipment.

There are several factors to consider when choosing a remote sensing system for a particular application, including spatial resolution, spectral resolution, radiometric resolution, and temporal resolution.

Spatial resolution refers to the size of the smallest object that can be detected in an image. The basic unit in an image is called a pixel. One-meter spatial resolution means each pixel in the image represents an area of one square meter. The smaller the area represented by one pixel, the higher the resolution of the image.
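For instance, the ground footprint of an image follows directly from its pixel grid and resolution (the 1024 x 1024 image size below is an assumed example):

```python
# Ground area covered by one image, from pixel count and spatial resolution.
# The image size is an assumed example value.
width_px, height_px = 1024, 1024
resolution_m = 1.0   # one-meter spatial resolution: 1 pixel = 1 m x 1 m

area_km2 = (width_px * resolution_m) * (height_px * resolution_m) / 1e6
print(f"one image covers about {area_km2:.2f} km^2")  # ~1.05 km^2
```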

Spectral resolution refers to the number of bands and the wavelength width of each band. A band is a narrow portion of the electromagnetic spectrum. Shorter wavelength widths can be distinguished in higher spectral resolution images. Multi-spectral imagery measures several wavelength bands, such as visible green or NIR; the Landsat, QuickBird, and SPOT satellites use multi-spectral sensors. Hyperspectral imagery measures energy in narrower and more numerous bands than multi-spectral imagery. The narrow bands of hyperspectral imagery are more sensitive to variations in energy wavelengths and therefore have a greater potential to detect crop stress than multi-spectral imagery. Multi-spectral and hyperspectral imagery can be used together to provide a more complete picture of crop conditions.

Radiometric resolution refers to the sensitivity of a remote sensor to variations in the reflectance levels. The higher the radiometric resolution of a remote sensor, the more sensitive it is to detecting small differences in reflectance values. Higher radiometric resolution allows a remote sensor to provide a more precise picture of a specific portion of the electromagnetic spectrum.

Temporal resolution refers to how often a remote sensing platform can provide coverage of an area. Geostationary satellites can provide continuous sensing, while normal orbiting satellites can only provide data each time they pass over an area. Remote sensing from cameras mounted on airplanes is often used for applications requiring more frequent coverage, though cloud cover can interfere with any scheduled remotely sensed data collection. Remote sensors located in fields or attached to agricultural equipment provide the most frequent temporal resolution.


Information and communications technology

Information and communications technology, usually called ICT, is often used as an extended synonym for information technology (IT), but it is usually a more general term that stresses the role of unified communications and the integration of telecommunications (telephone lines and wireless signals), intelligent building management systems, and audio-visual systems in modern information technology. ICT consists of all technical means used to handle information and aid communication, including computer and network hardware and communication middleware, as well as the necessary software. In other words, ICT consists of IT as well as telephony, broadcast media, all types of audio and video processing and transmission, and network-based control and monitoring functions. The expression was first used in 1997 in a report by Dennis Stevenson to the UK government and was promoted by the new National Curriculum documents for the UK in 2000.

ICT is often used in the context of an "ICT roadmap" to indicate the path that an organization will take with its ICT needs.
The term ICT is now also used to refer to the merging (convergence) of audio-visual and telephone networks with computer networks through a single cabling or link system. There are large economic incentives (huge cost savings due to elimination of the telephone network) to merge the audio-visual, building management, and telephone networks with the computer network system using a single unified system of cabling, signal distribution, and management. This in turn has spurred the growth of organizations with the term ICT in their names to indicate their specialization in the process of merging the different network systems.
“ICT” is used as a general term for all kinds of technologies that enable users to create, access, and manipulate information. ICT is a combination of information technology and communications technology.
In an increasingly interconnected world, the interactions among devices, systems, and people are growing rapidly. Businesses need to meet the demands of their employees and customers to allow for greater access to systems and information. All of these communications needs must be delivered in a unified way. By offering a scalable infrastructure, cloud computing models enable companies to work smarter through more agile and cost-effective access to technology and information. This unified platform reduces costs and boosts productivity across a business and beyond. Part of an information and communications technology roadmap should involve consolidating infrastructures, while providing added benefits to users in collaboration, messaging, calendaring, instant messaging, audio, video, and Web conferencing. Cloud computing is driving more efficient IT consumption and delivery and taking ICT to the next level.

ICT in Society
Information technology has taken over every aspect of our daily lives, from commerce to leisure and even culture. Today, mobile phones, desktop computers, handheld devices, email, and the use of the Internet have become a central part of our culture and society. ICT has made us a global society, where people can interact and communicate swiftly and efficiently.
ICT has contributed towards the elimination of language barriers. Examples of ICT tools are email, instant messaging (IM), chat rooms, and social networking websites and services such as Facebook, Twitter, Skype, iPhones, cellular phones, and similar applications. A disadvantage is that older generations find it difficult to keep up with the ever-changing technologies available today; the resistance to change and the inability to keep up with rapid technological evolution are areas to note. Many people in society are also not in a position to take advantage of available technology. This may be due to poverty, geographical location, or lack of access to technology.
ICT in Education
In current education systems worldwide, ICT has not been as extensively implemented as it has in other fields, such as business. Reasons for the absence of these technologies in education vary. Some experts suggest it is the high costs associated with implementing these technologies that prevent schools from using them in the classroom. Other experts argue that the social nature of current education systems, which require a substantial amount of personal contact between teachers and their students, prevents these technologies from being better integrated into the classroom setting.

Uses
The use of ICTs in education extends beyond equipping classrooms with computers and an Internet connection. There are a wide variety of ICTs currently available to schools and universities that can be implemented to enhance students’ overall learning experiences in numerous ways. Those schools and universities that have implemented ICTs primarily use these technologies to fulfill three objectives:
  • Increase Networking Opportunities: ICTs help connect schools to other schools, as well as individuals within those schools to one another. This ability to network is especially important for students in rural areas and students in developing countries.
  • Provide Distance Learning: With the advent of ICTs, learning has become Web-based. As a result, ICTs have started to replace correspondence schools.
  • Supplement Traditional Learning: One of the most common uses of ICTs in education involves students using software programs such as Microsoft Word to produce otherwise traditional written assignments.
