Saturday, 2 January 2016

Graphics Processing Unit

Definition of: GPU

(Graphics Processing Unit) A programmable logic chip that renders images, animations and video for the computer's screen. GPUs are located on plug-in cards, in a chipset on the motherboard, or in the same chip as the CPU.

A GPU performs parallel operations on data to render images for the screen. Although it is used for 2D data as well as for zooming and panning the screen, a GPU is essential for smooth decoding and rendering of 3D animations and video. The more sophisticated the GPU, the higher the resolution and the faster and smoother the motion in games and movies. GPUs on stand-alone cards include their own memory, while GPUs in the chipset or CPU chip share main memory with the CPU.
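
To make the data-parallel idea concrete, here is a minimal sketch (my own illustration, using NumPy on the CPU) of the pattern a GPU applies across thousands of cores at once: the same arithmetic applied to every pixel of a frame in a single vectorized operation rather than a pixel-by-pixel loop.

import numpy as np

# A toy 1080p RGB frame: height x width x 3 color channels.
frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)

# One data-parallel operation brightens every pixel at once; a GPU would
# execute the same instruction on many pixels simultaneously.
brightened = np.clip(frame.astype(np.float32) * 1.2, 0, 255).astype(np.uint8)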

Not Just Graphics Processing
Since GPUs perform parallel operations on multiple sets of data, they are increasingly used as vector processors for non-graphics applications that require repetitive computations. For example, in 2010 the Chinese Tianhe-1A supercomputer took the top spot in supercomputer rankings using more than seven thousand GPUs alongside its CPUs (see GPGPU). See graphics pipeline and multi-GPU.
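
As a hedged sketch of such GPGPU-style vector processing, assuming a CUDA-capable GPU and the CuPy library (whose API mirrors NumPy), the same repetitive arithmetic can be offloaded to the GPU in one expression:

import cupy as cp  # NumPy-compatible arrays that live in GPU memory

# Million-element vectors allocated on the GPU.
a = cp.arange(1_000_000, dtype=cp.float32)
b = cp.arange(1_000_000, dtype=cp.float32)

# SAXPY-style vector math: this single expression launches a parallel GPU kernel.
y = 2.0 * a + b

result = cp.asnumpy(y)  # copy the result back to host (CPU) memory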

Graphics Hardware Locations
In a PC, graphics rendering originally took place in the CPU alone. Over time, those functions were offloaded first to separate circuits and then to GPUs, located on separate cards, in the chipset, or in the CPU chip itself. See display adapter, integrated graphics and integrated GPU.

An Integrated GPU
This Trinity chip from AMD integrates a sophisticated GPU with four cores of x86 processing and a DDR3 memory controller. Each x86 section is a dual-core CPU with its own L2 cache.

ZigBee

ZigBee is an IEEE 802.15.4-based specification for a suite of high-level communication protocols used to create personal area networks with small, low-power digital radios.
The technology defined by the ZigBee specification is intended to be simpler and less expensive than other wireless personal area networks (WPANs), such as Bluetooth or Wi-Fi. Applications include wireless light switches, electrical meters with in-home displays, traffic management systems, and other consumer and industrial equipment that requires short-range, low-rate wireless data transfer.
Its low power consumption limits transmission distances to 10–100 meters line-of-sight, depending on power output and environmental characteristics. ZigBee devices can still cover long distances by relaying data through a mesh network of intermediate devices. ZigBee is typically used in low-data-rate applications that require long battery life and secure networking (ZigBee networks are secured with 128-bit symmetric encryption keys). It has a defined rate of 250 kbit/s, best suited for intermittent data transmissions from a sensor or input device.
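
A quick back-of-the-envelope sketch shows why 250 kbit/s is plenty for intermittent sensor traffic; the payload size below is an assumed figure for illustration, not part of the specification:

RATE_BPS = 250_000   # ZigBee's defined over-the-air rate, in bits per second
payload_bytes = 60   # assumed sensor report size (802.15.4 frames max out around 127 bytes)

airtime_ms = payload_bytes * 8 / RATE_BPS * 1000
print(f"One report occupies the radio for about {airtime_ms:.2f} ms")  # ~1.92 ms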

The E-Ball

The E-Ball concept PC is a sphere-shaped computer, the smallest design yet proposed among laptops and desktops.
The E-Ball was proposed by Apostol Tnokovski, a product designer from Macedonia. It is not meant to be a PDA but a full PC with the features of a conventional computer. Tnokovski chose a sphere because he considered it the most attractive shape in nature, the one that draws the most attention. The body is made of aluminium and plastic parts, built around a 120x120 mm motherboard inside a sphere just 6 inches in diameter. Despite its size, the E-Ball includes all the elements of a conventional computer, such as a mouse, keyboard and display.


Sixth Sense Technology

SixthSense is a gesture-based wearable computer system developed at the MIT Media Lab: Steve Mann built head-worn and neck-worn versions in the 1990s, and Pranav Mistry developed the system further in 2009.
The SixthSense technology contains a pocket projector and a camera contained in a head-mounted, handheld or pendant-like wearable device. Both the projector and the camera are connected to a mobile computing device in the user's pocket. The projector projects visual information, enabling surfaces, walls and physical objects around the user to be used as interfaces, while the camera recognizes and tracks the user's hand gestures and physical objects using computer-vision techniques. The software processes the video stream captured by the camera and tracks the locations of the colored markers (visual tracking fiducials) at the tips of the user's fingers. The movements and arrangements of these fiducials are interpreted as gestures that act as interaction instructions for the projected application interfaces. SixthSense supports multi-touch and multi-user interaction.
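
As a rough sketch of the color-marker tracking step, here is a minimal OpenCV example; this is my own illustration rather than the actual SixthSense code, and the HSV range is an assumed value for a red fingertip marker:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)   # a webcam standing in for the pendant camera
ok, frame = cap.read()
if ok:
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Assumed HSV range for a red marker cap; a real system would calibrate this.
    mask = cv2.inRange(hsv, np.array([0, 120, 120]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:    # compute the centroid of each detected marker blob
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            print("marker at", cx, cy)
cap.release()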


Example applications

  • Four colored cursors are controlled by four fingers wearing different colored markers in real time. The projector displays video feedback to the user on a vertical wall.
  • The projector displaying a map on the wall, and the user controlling it with zoom and pan gestures (see the sketch after this list).
  • The user can make a frame gesture to instruct the camera to take a picture. It is hinted that the photo will be automatically cropped to remove the user's hands.
  • The system could project multiple photos on a wall, and the user could sort, resize and organize them with gestures. This application was called Reality Window Manager (RWM) in Mann's head-worn implementation of SixthSense.[13]
  • A number pad is projected onto the user's palm, and the user can dial a phone number by touching the palm with a finger. It was hinted that the system can pinpoint the location of the palm, and that the camera and projector can adjust themselves for surfaces that are not horizontal.
  • The user can pick up a product in a supermarket (e.g. a package of paper towels), and the system could display related information (e.g. the amount of bleach used) back on the product itself.
  • The system can recognize any book picked up by the user and display its Amazon rating on the cover.
  • As the user opens a book, the system can display additional information such as readers' comments.
  • The system is able to recognize individual pages of a book and display annotations by the user's friends. This demo also suggested the system could handle tilted surfaces.
  • The system is able to recognize newspaper articles and project the most recent video about the news event onto a blank region of the newspaper.
  • The system is able to recognize people by their appearance and project a word cloud of related information, retrieved from the Internet, on the person's body.
  • The system is able to recognize a boarding pass and display related information such as flight delay and gate change.
  • The user can draw a circle on his or her wrist, and the system will project a clock on it.
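
To illustrate how tracked fiducial positions become gestures such as the zoom used in the map demo, here is a toy sketch (entirely my own, with an arbitrary threshold): if the distance between two fingertips grows between frames, treat it as zoom-in.

import math

def gesture_from_fiducials(prev, curr, threshold=10):
    # prev/curr: ((x1, y1), (x2, y2)) fingertip positions in consecutive frames.
    d_prev = math.dist(*prev)
    d_curr = math.dist(*curr)
    if d_curr - d_prev > threshold:
        return "zoom_in"       # fingertips moving apart
    if d_prev - d_curr > threshold:
        return "zoom_out"      # fingertips moving together
    return "pan_or_idle"

print(gesture_from_fiducials(((100, 100), (200, 100)),
                             ((80, 100), (220, 100))))  # -> zoom_in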

Gesture Recognition Technology

Gesture recognition, along with facial recognition, voice recognition, eye tracking and lip-movement recognition, is a component of what developers refer to as a perceptual user interface (PUI). The goal of a PUI is to enhance the efficiency and ease of use of the underlying logical design of a stored program, a design discipline known as usability.
In personal computing, gestures are most often used as input commands. Recognizing gestures as input allows computers to be more accessible to the physically impaired and makes interaction more natural in a gaming or 3-D virtual-reality environment. Hand and body gestures can be amplified by a controller that contains accelerometers and gyroscopes to sense tilting, rotation and acceleration of movement -- or the computing device can be outfitted with a camera so that software in the device can recognize and interpret specific gestures. A wave of the hand, for instance, might terminate the program.
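
As a hedged sketch of the controller-based approach, tilt can be estimated from raw 3-axis accelerometer readings; the axis conventions and sample values below are assumptions for illustration, not tied to any particular device:

import math

def tilt_degrees(ax, ay, az):
    # Estimate pitch and roll from one accelerometer sample (values in g).
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay**2 + az**2)))
    roll = math.degrees(math.atan2(ay, math.sqrt(ax**2 + az**2)))
    return pitch, roll

# A device lying flat reads roughly (0, 0, 1); tilting it forward shifts ax.
print(tilt_degrees(0.5, 0.0, 0.87))  # pitch ~ 30 degrees, roll ~ 0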
In addition to the technical challenges of implementing gesture recognition, there are also social challenges. Gestures must be simple, intuitive and universally acceptable. The study of gestures and other nonverbal types of communication is known as kinesics.

Fog Computing

Fog computing, also known as fogging, is a distributed computing infrastructure in which some application services are handled at the network edge in a smart device and some are handled in a remote data center -- in the cloud. The goal of fogging is to reduce the amount of data that must be transported to the cloud for processing, analysis and storage. This is usually done for efficiency, but it may also be done for security and compliance reasons.
In a fog computing environment, much of the processing takes place in a data hub on a smart mobile device or at the edge of the network in a smart router or other gateway device. This distributed approach is growing in popularity because of the Internet of Things (IoT) and the immense amount of data that sensors generate. It is simply inefficient to transmit everything a bundle of sensors creates to the cloud for processing and analysis; doing so requires a great deal of bandwidth, and all the back-and-forth communication between the sensors and the cloud can negatively impact performance. Although latency may be merely annoying when the sensors are part of a gaming application, delays in data transmission can be life-threatening if the sensors are part of a vehicle-to-vehicle communication system or a large-scale distributed control system for rail travel.
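
A minimal sketch of this fogging pattern (the function name, threshold and data are my own illustration): the edge node aggregates raw readings locally and forwards only a compact summary, or an alert, to the cloud.

def edge_process(readings, alert_threshold=80.0):
    # Runs at the network edge: aggregate locally, send little upstream.
    summary = {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
    }
    # Only this small dict travels to the cloud instead of every raw reading.
    if summary["max"] > alert_threshold:
        summary["alert"] = True
    return summary

print(edge_process([71.2, 69.8, 85.3, 70.1]))  # one summary instead of N readings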
The term fog computing is often associated with Cisco: "Cisco Fog Computing" is a registered name, while "fog computing" itself is open to the community at large. The word "fog" is meant to convey the idea that the advantages of cloud computing can -- and should -- be brought closer to the data source. (In meteorology, fog is simply a cloud that is close to the ground.)

Cloud Computing

In the simplest terms, cloud computing means storing and accessing data and programs over the Internet instead of on your computer's hard drive. The cloud is just a metaphor for the Internet. It goes back to the days of flowcharts and presentations that would represent the gigantic server-farm infrastructure of the Internet as nothing but a puffy, white cumulonimbus cloud, accepting connections and doling out information as it floats.
What cloud computing is not about is your hard drive. When you store data on or run programs from the hard drive, that's called local storage and computing. Everything you need is physically close to you, which means accessing your data is fast and easy -- for that one computer, or others on the local network. Working off your hard drive is how the computer industry functioned for decades, and some would argue it's still superior to cloud computing.
The cloud is also not about having dedicated network-attached storage (NAS) hardware or a server in residence. Storing data on a home or office network does not count as utilizing the cloud. (However, some NAS devices will let you remotely access things over the Internet, and there's at least one NAS named "My Cloud," just to keep things confusing.)
For it to be considered "cloud computing," you need to access your data or your programs over the Internet, or at the very least have that data synchronized with other information over the Web. In a big business, you may know all there is to know about what's on the other side of the connection; as an individual user, you may never have any idea what kind of massive data processing is happening on the other end. The end result is the same: with an online connection, cloud computing can be done anywhere, anytime.
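
In practice, "accessing your data over the Internet" usually boils down to an authenticated HTTP call to a provider's API. The sketch below is hypothetical; the URL and token are placeholders, not a real service:

import requests  # third-party HTTP library (pip install requests)

# Hypothetical cloud-storage endpoint; real providers expose similar APIs.
resp = requests.get(
    "https://cloud.example.com/files/notes.txt",
    headers={"Authorization": "Bearer <your-api-token>"},
    timeout=10,
)
print(resp.status_code, resp.text[:100])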

Friday, 1 January 2016

Motherboard

A motherboard is one of the most essential parts of a computer system. It holds together many of the crucial components of a computer, including the central processing unit (CPU), memory, and connectors for input and output devices.

Parts of a Motherboard

If you were to open up your computer and take out the motherboard, you would probably be quite confused by all the different parts. Depending on the make and model of your computer, the exact layout will vary.


To understand how computers work, you don't need to know every single part of the motherboard. However, it is good to know some of the most important parts and how the motherboard connects the various parts of a computer system together. Some of the typical parts are described below:
• A CPU socket - the CPU itself is mounted directly in this socket. Since high-speed CPUs generate a lot of heat, there are heat sinks and mounting points for fans right next to the CPU socket.
• A power connector to distribute power to the CPU and other components.
• Slots for the system's main memory, typically in the form of DRAM chips.
• A chip that forms an interface between the CPU, the main memory and other components. On many motherboards this is referred to as the Northbridge. This chip also carries a large heat sink.
• A second chip that controls the input and output (I/O) functions. It is not connected directly to the CPU but to the Northbridge. This I/O controller is referred to as the Southbridge. The Northbridge and Southbridge together are referred to as the chipset (see the sketch after this list).
• Several connectors, which provide the physical interface between input and output devices and the motherboard. The Southbridge handles these connections.
• Connectors for one or more hard drives to store files. The most common types are Integrated Drive Electronics (IDE) and Serial Advanced Technology Attachment (SATA).
• A read-only memory (ROM) chip, which contains the firmware, or startup instructions, for the computer system. This is also called the BIOS.
• A slot for a video or graphics card. There are a number of different slot types, including Accelerated Graphics Port (AGP) and Peripheral Component Interconnect Express (PCIe).
• Additional slots to connect hardware in the form of Peripheral Component Interconnect (PCI) slots.
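
As a toy way to summarize the topology described in this list (my own modeling, not a standard), the Northbridge/Southbridge split can be written down as a simple map of which components attach where:

# Classic two-chip chipset topology, as described in the list above.
chipset = {
    "Northbridge": ["CPU", "main memory (DRAM)", "graphics slot (AGP/PCIe)"],
    "Southbridge": ["IDE/SATA drives", "PCI slots", "I/O connectors", "BIOS ROM"],
}

for chip, devices in chipset.items():
    print(f"{chip} connects: {', '.join(devices)}")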

Google Glass

Google Glass is a headset, or optical head-mounted display, that is worn like a pair of eyeglasses. It was developed with the mission of producing a ubiquitous computer. Google Glass displayed information in a smartphone-like, hands-free format, and wearers communicated with the Internet via natural-language voice commands.



Features

• Touchpad: A touchpad is located on the side of Google Glass, allowing users to control the device by swiping through a timeline-like interface displayed on the screen. Sliding backward shows current events, such as weather, and sliding forward shows past events, such as phone calls, photos and circle updates.
• Camera: Google Glass has the ability to take photos and record 720p HD video.
• Display: The Explorer version of Google Glass uses a liquid crystal on silicon (LCoS), field-sequential color, LED-illuminated display based on an LCoS chip from Himax. The display's LED illumination is first P-polarized and then shines through the in-coupling polarizing beam splitter (PBS) to the LCoS panel. The panel reflects the light, altering it to S-polarization at active pixel sensor sites. The in-coupling PBS then reflects the S-polarized areas of light at 45° through the out-coupling beam splitter to a collimating reflector at the other end. Finally, the out-coupling beam splitter (which is a partially reflecting mirror, not a polarizing beam splitter) reflects the collimated light another 45° and into the wearer's eye.