CYBER SECURITY

WHAT IS CYBER SECURITY?

Cyber security is the state or process of protecting and recovering networks, devices, and programs from any type of cyberattack or unauthorized access.

The main purpose of cyber security is to protect all organizational assets from both external and internal threats, as well as from disruptions caused by natural disasters.

As organizational assets are made up of multiple disparate systems, an effective and efficient cyber security posture requires coordinated efforts across all its information systems. Therefore, cyber security is composed of the following sub-domains:

Network Security protects network traffic by controlling incoming and outgoing connections to prevent threats from entering or spreading throughout the network.

Data Loss Prevention (DLP) protects data by focusing on the location, classification, and monitoring of information at rest, in use and in motion.

Intrusion Detection Systems (IDS) or Intrusion Prevention Systems (IPS) work to identify potentially dangerous cyber activity.

Identity and Access Management (IAM) uses authentication services to limit and track employee access to protect internal systems from malicious entities.

Encryption is the process of encoding data to render it unintelligible, and is often used during data transfer to prevent theft in transit.

Antivirus/anti-malware solutions scan computer systems for known threats. Modern solutions are able to detect previously unknown threats based on their behavior.

Application Security

Application security implements various defenses within all software and services used within an organization against a wide range of threats. It requires designing secure application architectures, writing secure code, implementing strong data input validation, threat modeling, etc., to minimize the likelihood of any unauthorized access or modification of application resources.

Mobile Security

Mobile security refers to protecting both organizational and personal information stored on mobile devices like cell phones, laptops, tablets, etc. from various threats such as unauthorized access, device loss, malware, etc.

Cloud Security

Cloud security relates to designing secure cloud architectures and applications for organizations using various cloud service providers such as AWS, Google, Azure, Rackspace, etc. Effective architecture and environment configuration ensures protection against many threats.

Disaster recovery and business continuity planning (DR&BC)

DR&BC deals with processes, monitoring, alerts and plans that help organizations prepare to keep business-critical systems online during and after any kind of disaster, as well as to resume lost operations and systems after an incident.

User education

Training individuals on computer security topics is essential for raising awareness about industry best practices and organizational procedures and policies, as well as for monitoring and reporting malicious activities.

Common types of cyber threats

Malware

  Malicious software such as computer viruses, spyware, Trojan horses, and keyloggers.

Ransomware

  Malware that locks data until a ransom is paid.

Phishing Attacks 

The practice of obtaining sensitive information (e.g., passwords, credit card information) through a disguised email, phone call, or text message.

Social engineering 

 The psychological manipulation of individuals to obtain confidential information; often overlaps with phishing.

Advanced Persistent Threat

 An attack in which an unauthorized user gains access to a system or network and remains there for a period of time without being detected.

WHAT IS A SECURITY BREACH?

A security breach occurs when an intruder gains unauthorized access to an organization’s protected systems and data. Cyber criminals or malicious applications bypass security mechanisms to reach restricted areas. A security breach is an early-stage violation that can lead to system damage and data loss.

11 top cyber security best practices to prevent a breach

1.       Conduct cyber security training and awareness

A strong cyber security strategy will not succeed if employees are not educated on cyber security topics, company policies and incident reporting. Even the best technical defenses may fall apart when employees make unintentional mistakes or take intentional malicious actions, resulting in a costly security breach. Educating employees and raising awareness of company policies and security best practices through seminars, classes and online courses is the best way to reduce negligence and security violations.

2.       Perform risk assessments

Organizations should perform a formal risk assessment to identify all valuable assets and prioritize them based on the impact to the organization if each asset is compromised. This will help organizations decide how to best spend their resources on securing each valuable asset.

3.       Ensure vulnerability management and software patch management/updates

It is crucial for organizational IT teams to identify, classify, remediate and mitigate vulnerabilities within all the software and networks they use, in order to reduce threats against their IT systems. Furthermore, security researchers and attackers identify new vulnerabilities in various software from time to time, which are reported back to the software vendors or released to the public. These vulnerabilities are often exploited by malware and cyber attackers. Software vendors periodically release updates which patch and mitigate these vulnerabilities. Therefore, keeping IT systems up to date helps protect organizational assets.

4.       Use the principle of least privilege

The principle of least privilege states that both software and personnel should be allotted the least amount of permissions necessary to perform their duties. This helps limit the damage of a successful security breach, as user accounts and software with lower permissions would not be able to impact valuable assets that require a higher-level set of permissions. Also, two-factor authentication should be used for all high-level user accounts that have unrestricted permissions.

5.       Enforce secure password storage and policies

Organizations should enforce the use of strong passwords that adhere to industry-recommended standards for all employees. Passwords should also be changed periodically to help protect against compromised credentials. Furthermore, password storage should follow industry best practices of using salts and strong, slow hashing algorithms.
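
As a minimal sketch of what salted, slow password hashing can look like in practice (using only Python's standard library; the iteration count here is an illustrative choice, not a recommendation):

import hashlib, hmac, os

def hash_password(password):
    # A random 16-byte salt means identical passwords never share a hash.
    salt = os.urandom(16)
    # PBKDF2 with SHA-256; the iteration count is an illustrative value.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True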

6.       Implement a robust business continuity and incident response (BC-IR) plan

Having solid BC-IR plans and policies in place will help an organization effectively respond to cyber-attacks and security breaches while ensuring critical business systems remain online.

7.       Perform periodic security reviews

Having all software and networks go through periodic security reviews helps identify security issues early and in a safe environment. Security reviews include application and network penetration testing, source code reviews, architecture design reviews, red team assessments, etc. Once security vulnerabilities are found, organizations should prioritize and mitigate them as soon as possible.

8.       Backup data

Backing up all data periodically increases redundancy and helps ensure that sensitive data is not lost after a security breach. Attacks such as injections and ransomware compromise the integrity and availability of data, and backups can help protect against such cases.

9.       Use encryption for data at rest and in transit

All sensitive information should be stored and transferred using strong encryption algorithms. Encrypting data ensures confidentiality. Effective key management and rotation policies should also be put in place. All web applications and software should use SSL/TLS.
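
A minimal sketch of symmetric encryption for data at rest, assuming the third-party Python 'cryptography' package is installed (key management and rotation are deliberately out of scope here):

from cryptography.fernet import Fernet

# In a real deployment the key would come from a key management system,
# not be generated next to the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

token = cipher.encrypt(b"customer record: card ending 1234")  # stored ciphertext
plaintext = cipher.decrypt(token)                             # authorized read
print(plaintext)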

10.   Design software and networks with security in mind

When creating applications, writing software or architecting networks, always design them with security in mind. Bear in mind that the cost of refactoring software and adding security measures later is far greater than the cost of building in security from the start. Applications designed with security in mind help reduce threats and ensure that when software or networks fail, they fail safely.

11.   Implement strong input validation and industry standards in secure coding

Strong input validation is often the first line of defense against various types of cyber attacks. Software and applications are designed to accept user input, which opens them up to attacks; this is where strong input validation helps filter out malicious input payloads before the application processes them. Furthermore, secure coding standards should be used when writing software, as these help avoid most of the prevalent vulnerabilities outlined in OWASP and CVE.
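
A minimal sketch of allowlist-style input validation in Python (the field names and rules are hypothetical examples, not an exhaustive defense):

import re

# Allowlist patterns: accept only what each field is expected to contain.
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")
QUANTITY_RE = re.compile(r"^[0-9]{1,4}$")

def validate_order(form):
    username = form.get("username", "")
    quantity = form.get("quantity", "")
    if not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    if not QUANTITY_RE.fullmatch(quantity):
        raise ValueError("invalid quantity")
    # Only validated, typed values are handed to the rest of the application.
    return {"username": username, "quantity": int(quantity)}

print(validate_order({"username": "alice_01", "quantity": "3"}))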

INTERNET OF THINGS (IoT)

Internet of things :-

The Internet of Things is the concept of connecting any device (so long as it has an on/off switch) to the Internet and to other connected devices. The IoT is a giant network of connected things – all of which collect and share data about the way they are used and about the environment around them.

That includes an extraordinary number of objects of all shapes and sizes – from smart microwaves, which automatically cook your food for the right length of time, to self-driving cars, whose complex sensors detect objects in their path, to wearable fitness devices that measure your heart rate and the number of steps you’ve taken that day, then use that information to suggest exercise plans tailored to you. There are even connected footballs that can track how far and fast they are thrown and record those statistics via an app for future training purposes.

In simple words, IoT means smart devices connected to the internet and used for various purposes.

WORKING

A complete IoT system contains four distinct components: sensors/devices, connectivity, data processing, and a user interface.

1) Sensors/Devices

First, sensors or devices collect data from their environment. This could be as simple as a temperature reading or as complex as a full video feed.

I use “sensors/devices,” because multiple sensors can be bundled together or sensors can be part of a device that does more than just sense things. For example, your phone is a device that has multiple sensors like camera, accelerometer, GPS, etc, but your phone is not just a sensor.

However, whether it’s a standalone sensor or a full device, in this first step data is being collected from the environment.

Devices and objects with built in sensors are connected to an Internet of Things platform, which integrates data from the environment and applies analytics to share the most valuable information with applications built to address specific needs.

2) Connectivity

Next, that data is sent to the cloud, but it needs a medium to get there!

The data can be sent through Wi-Fi, Ethernet, WLAN, etc.

Each option has tradeoffs between power consumption, range and bandwidth. Choosing which connectivity option is best comes down to the specific IoT application, but they all accomplish the same task: getting data to the cloud.

3) Data Processing

Once the data gets to the cloud, software does some kind of processing on it.

This could be very simple, such as checking whether a temperature reading is within an acceptable range, or it could be very complex, such as using computer vision on video to identify objects, such as an intruder in your house.

But what happens when the temperature is too high or if there is an intruder in your house? That’s where the user comes in.

4) User Interface

Next, the information is made useful to the end-user in some way. This could be via an alert to the user through email, text, notification, etc. For example, a text alert when the temperature is too high in the company’s cold storage.

Also, a user might have an interface that allows them to proactively check in on the system. For example, a user might want to check the video feeds in their house through a phone app or a web browser.

However, it’s not always a one-way street. Depending on the IoT application, the user may also be able to perform an action and affect the system. For example, the user might remotely adjust the temperature in the cold storage through an app on their phone.

And some actions are performed automatically. Rather than waiting for you to adjust the temperature, the system could do it automatically through predefined rules. And rather than just calling to alert you of an intruder, the IoT system could also automatically notify the relevant authorities.
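
A minimal sketch of such a predefined rule on the processing side (the threshold and the alert/actuation functions are hypothetical stand-ins):

COLD_STORAGE_MAX_C = 8.0  # hypothetical acceptable limit

def send_alert(message):
    # Stand-in for an email/SMS/push notification integration.
    print("ALERT:", message)

def set_cooling_power(level):
    # Stand-in for a command sent back to the connected cooling unit.
    print("cooling power set to", level)

def process_reading(temperature_c):
    if temperature_c > COLD_STORAGE_MAX_C:
        send_alert("Cold storage at %.1f C, above limit" % temperature_c)
        set_cooling_power(1.0)   # automatic corrective action
    else:
        set_cooling_power(0.4)   # normal operation

process_reading(9.3)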

ARTIFICIAL INTELLIGENCE

WHAT IS AI ?

It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last one may be impossible without sparking an entire world of debate regarding the existence of consciousness, scientists have certainly been making strides with the first two.

Artificial intelligence (AI) is the general field that covers everything that has to do with imbuing machines with “intelligence,” with the goal of emulating a human being’s unique reasoning faculties. Machine learning is a category within the larger field of artificial intelligence that is concerned with conferring upon machines the ability to “learn.”  This is achieved by using algorithms that discover patterns and generate insights from the data they are exposed to, for application to future decision-making and predictions, a process that sidesteps the need to be programmed specifically for every single possible action.

Deep learning, on the other hand, is a subset of machine learning: it is the most advanced AI field, the one that brings AI closest to the goal of enabling machines to learn and think as much like humans as possible.

In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence.

How is artificial intelligence applied?

Popular misconceptions tend to place AI on an island with robots and self-driving cars. However, this approach fails to recognize artificial intelligence’s major practical application i.e. processing the vast amounts of data generated daily.

By strategically applying AI to certain processes, insight gathering and task automation occur at an unimaginable rate and scale.

Parsing through the mountains of data created by humans, AI systems perform intelligent searches, interpreting both text and images to discover patterns in complex data, and then act on those learnings.

What are the basic components of artificial intelligence?

Many of AI’s revolutionary technologies are common buzzwords, like “natural language processing,” “deep learning,” and “predictive analytics”: cutting-edge technologies that enable computer systems to understand the meaning of human language, learn from experience, and make predictions, respectively.

Understanding AI jargon is the key to facilitating discussion about the real-world applications of this technology. The technologies are disruptive, revolutionizing the way humans interact with data and make decisions, and should be understood in basic terms by all of us.

Machine Learning | Learning from experience

Machine learning is just one approach to realizing artificial intelligence, and it ultimately eliminates the need to hand-code the software with a list of possibilities and how the machine intelligence ought to react to each of them. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked on evolving artificial intelligence from merely recognizing patterns to learning from experience, making him a pioneer of the field. He used a game of checkers for his research while working at IBM, and this subsequently influenced the programming of early IBM computers.

Current applications are becoming more and more sophisticated, making their way into complex medical applications.

Machine learning, or ML, is an application of AI that provides computer systems with the ability to automatically learn and improve from experience without being explicitly programmed. ML focuses on the development of algorithms that can analyze data and make predictions. Beyond recommending what Netflix movies you might like, or the best route for your Uber, machine learning is being applied in the healthcare, pharma and life sciences industries to aid disease diagnosis and medical image interpretation, and to accelerate drug development.
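
As a small illustration of "learning from data rather than explicit programming", here is a sketch using the scikit-learn library (assumed to be installed); the tiny toy dataset is made up purely for demonstration:

from sklearn.linear_model import LogisticRegression

# Toy data: [hours watched, action scenes per hour] -> liked the movie (1) or not (0).
X = [[1.0, 2], [0.5, 1], [2.0, 8], [1.8, 7], [0.3, 0], [2.5, 9]]
y = [0, 0, 1, 1, 0, 1]

model = LogisticRegression()
model.fit(X, y)                    # the "learning from experience" step

print(model.predict([[2.2, 6]]))   # predict a preference for a new, unseen example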

Deep Learning | Self-educating machines

As we go into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics a human brain’s neural networks in order to make sense of patterns, even with noise, missing details, and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data, and tremendous computing power.

It means not having to laboriously program a prospective AI with that elusive quality of “intelligence”—however defined. Instead, all the potential for future intelligence and reasoning powers are latent in the program itself, much like an infant’s inchoate but infinitely flexible mind.

Another application of deep learning is speech recognition, which enables the voice assistant in phones to understand questions like, “Hey Siri, how does artificial intelligence work?”

Neural Network | Making associations

Neural networks enable deep learning. As mentioned, neural networks are computer systems modeled after the neural connections in the human brain. The artificial equivalent of a human neuron is a perceptron. Just like bundles of neurons create neural networks in the brain, stacks of perceptrons create artificial neural networks in computer systems. Neural networks learn by processing training examples. The best examples come in the form of large data sets, like, say, a set of 1,000 cat photos. By processing the many images (the inputs), the machine is able to produce a single output, answering the question, “Is the image a cat or not?” This process analyzes data many times to find associations and give meaning to previously undefined data. Through different learning models, like positive reinforcement, the machine is taught that it has successfully identified the object.
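
A minimal sketch of a single perceptron, the artificial neuron mentioned above; the weights, learning rate and training loop are illustrative:

# A single perceptron learning the logical AND function.
weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def predict(x):
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

for _ in range(20):                       # repeat over the training examples
    for x, target in data:
        error = target - predict(x)       # the reinforcement signal
        weights[:] = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in data])      # expected: [0, 0, 0, 1]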

COGNITIVE COMPUTING

In general, the term cognitive computing has been used to refer to new hardware and/or software that mimics the functioning of the human brain and helps to improve human decision-making. In this sense, CC is a new type of computing with the goal of building more accurate models of how the human brain/mind senses, reasons, and responds to stimulus. CC applications link data analysis and adaptive page displays (AUI) to adjust content for a particular type of audience. As such, CC hardware and applications strive to be more affective and more influential by design.

Some features that cognitive systems may express are:

Adaptive: They may learn as information changes, and as goals and requirements evolve. They may resolve ambiguity and can tolerate unpredictability. They may be engineered to feed on dynamic data in real time, or near real time.

Interactive: They may interact easily with users so that those users can define their needs comfortably. They may also interact with other processors and devices, as well as with people.

Iterative and stateful: They may aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They may “remember” previous interactions in a process and return information that is suitable for the specific application at that point in time.

Contextual: They may understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulations, user’s profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).

Natural Language Processing (NLP) | Understanding the language

Natural Language Processing, or NLP, allows computers to interpret, recognize, and produce human language and speech. The ultimate goal of NLP is to enable seamless interaction with the machines we use every day by teaching systems to understand human language in context and produce logical responses. Real-world examples of NLP include Skype Translator, which interprets the speech of multiple languages in real time to facilitate communication.

Computer Vision | Understanding images

Computer vision is a technique that implements deep learning and pattern identification to interpret the content of an image, i.e. the graphs, tables, and pictures within PDF documents, as well as other text and video. Computer vision is an integral field of AI, enabling computers to identify, process and interpret visual data. Applications of this technology have already begun to revolutionize industries like research & development and healthcare, where computer vision and machine learning are being used to diagnose patients faster by evaluating their X-ray scans.

WHAT IS THE NEED FOR AI?

  • AI automates repetitive learning and discovery through data. But AI is different from hardware-driven, robotic automation. Instead of automating manual tasks, AI performs frequent, high-volume, computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still essential to set up the system and ask the correct questions.
  • AI adds intelligence to existing products. In most cases, AI will not be sold as an individual application. Rather, products you already use will be improved with AI capabilities, just like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots and smart machines can be combined with large amounts of data to improve many technologies at home and in the workplace, from security intelligence to investment analysis.
  • AI adapts through progressive learning algorithms to let the data do the programming. AI finds structure and regularities in data so that the algorithm acquires a skill: The algorithm becomes a classifier or a predictor. So, just as the algorithm can teach itself how to play chess, it can teach itself what product to recommend next online. And the models adapt when given new data. Back propagation is an AI technique that allows the model to adjust, through training and added data, when the first answer is not right.
  • AI analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago. All that has changed with incredible computer power and big data. You need lots of data to train deep learning models because they learn directly from the data. The more data you can feed them, the more accurate they can become.
  • AI achieves incredible accuracy through deep neural networks – which was previously impossible. For example, your interactions with Alexa, Google Search and Google Photos are all based on deep learning – and they keep getting more accurate the more we use them. In the medical field, AI techniques such as deep learning, image classification and object recognition can now be used to find cancer on MRIs with accuracy comparable to that of highly trained radiologists.
  • AI gets the most out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; you just have to apply AI to get them out. Since the role of the data is now more important than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone is applying similar techniques, the best data will win.

HOW DOES 5G TECHNOLOGY WORK?

ABOUT 5G :- 5G is the 5th generation of mobile networks, a significant evolution of today’s 4G LTE networks. 5G has been designed to meet the very large growth in data and connectivity of today’s modern society, the internet of things with billions of connected devices, and tomorrow’s innovations. 5G will initially operate in conjunction with existing 4G networks before evolving to fully standalone networks in subsequent releases and coverage expansions.

 It will take a much larger role than previous generations.

5G will elevate the mobile network to not only interconnect people, but also interconnect and control machines, objects, and devices. It will deliver new levels of performance and efficiency that will empower new user experiences and connect new industries. 5G will deliver multi-Gbps peak rates, ultra-low latency, massive capacity, and more uniform user experience.

WORKING

To better understand 5G’s potential, it’s worth quickly reviewing how cell phones work. Cell phones, at their most basic, are essentially two-way radios. They convert your voice into digital data that can be sent via radio waves, and of course, smartphones can send and receive Internet data too, which is how you’re able to ride a city bus while playing “Flappy Bird” and texting your friends.

A mobile network has two main components i.e. the ‘Radio Access Network’ and the ‘Core Network’.

The Radio Access Network  consists of various types of facilities like small cells, towers, masts and dedicated in-building and home systems that connect mobile users and wireless devices to the main core network.

Small cells will be a major feature of 5G networks particularly at the new millimetre wave (mmWave) frequencies where the connection range is very short. To provide a continuous connection, small cells will be distributed in clusters depending on where users require connection which will complement the macro network that provides wide-area coverage.

5G Macro Cells will use MIMO (multiple input, multiple output) antennas that have multiple elements or connections to send and receive more data simultaneously. The benefit to users is that more people can simultaneously connect to the network and maintain high throughput. Where MIMO antennas use very large numbers of antenna elements, they are often referred to as ‘massive MIMO’; however, the physical size is similar to that of 3G and 4G base station antennas.

The Core Network is the mobile exchange and data network that manages all of the mobile voice, data and internet connections. For 5G, the ‘core network’ is being redesigned to better integrate with the internet and cloud-based services, and it also includes distributed servers across the network, improving response times, i.e. reducing latency.

Many of the advanced features of 5G including network function virtualization and network slicing for different applications and services, will be managed in the core.

Network Slicing enables a smart way to differentiate the network for a particular industry, business or application. For example, emergency services could operate on a network slice independently from other users.

Network Function Virtualization (NFV) is the ability to instantiate network functions in real time at any desired location within the operator’s cloud platform. Network functions that used to run on dedicated hardware, for example a firewall and encryption at business premises, can now operate in software on a virtual machine. NFV is crucial to enabling the speed, efficiency and agility required to support new business applications and is an important technology for a 5G core.

millimeter wave

Radio signals are measured by their wavelengths. The shorter the wavelength, the higher the frequency. 5G signals will use frequencies between 30 and 300 gigahertz, whose wavelengths are measured in millimeters. These frequencies are called millimeter waves because they have wavelengths between 1 mm and 10 mm, while the wavelengths of the radio waves currently used by smartphones are mostly several dozen centimeters. That’s why 5G is considered a millimeter wave technology.

The very high frequency of these signals is important to note. It means that 5G will be capable of incredible data bandwidth, so that many people will simultaneously send and receive nearly immeasurable amounts of data.

Advantage of mmWave technology :-

There are two ways to increase the speed of wireless data transmission: increase the spectrum utilization, or increase the spectrum bandwidth. Compared to the first approach, increasing the second one, i.e. the spectrum bandwidth, is simpler and more direct. Without changing the spectrum utilization, increasing the available bandwidth several times over can increase data transmission speeds by a similar amount. The problem is that the commonly used frequencies below 5 GHz are already extremely crowded, so where can we find new spectrum resources? 5G’s use of millimeter waves applies the second of the two methods to increase transmission speeds.

Based on communication principles, the maximum signal bandwidth in wireless communication is about 5% of the carrier frequency. Therefore, the higher the carrier frequency, the greater the signal bandwidth. That’s why, among the millimeter-wave frequencies, 28 GHz and 60 GHz are the most promising frequencies for 5G networks. The 28 GHz band can provide an available spectrum bandwidth of up to 1 GHz, while each channel in the 60 GHz band can provide an available signal bandwidth of 2 GHz (a total available spectrum of 9 GHz divided between four channels).
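
A quick back-of-the-envelope check of that rule of thumb (the 5% figure is the approximation stated above, not an exact limit, and the carrier frequencies below are just examples):

# Maximum usable signal bandwidth is roughly 5% of the carrier frequency.
for carrier_ghz in (2.6, 28.0, 60.0):
    max_bandwidth_ghz = 0.05 * carrier_ghz
    print("%5.1f GHz carrier -> about %.2f GHz of signal bandwidth" % (carrier_ghz, max_bandwidth_ghz))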

Disadvantage of mmWave technology :-

The use of millimeter waves has one major drawback. Millimeter waves are not capable of penetrating structures and other obstacles; even leaves or rain can absorb these signals. This is also why 5G networks will have to adopt the small base station method to enhance conventional cell tower infrastructure.

Because millimeter waves have high frequencies and short wavelengths, the antennas used to receive them can be smaller, allowing for the construction of small base stations. We can predict that, in the future, 5G mobile communication will no longer depend on the construction of large-scale base stations, but rather many small base stations. This will allow 5G to cover peripheral areas that are not reached by large base stations.

MASSIVE MIMO

multiple element base station – greater capacity, multiple users, faster data

5G will use ‘massive’ MIMO (multiple input, multiple output) antennas that have very large numbers of antenna elements or connections to send and receive more data simultaneously. The advantage to users is that more people can simultaneously connect to the network and maintain high throughput.
The overall physical size of the 5G massive MIMO antennas will be similar to 4G; however, with a higher frequency, the individual antenna element size is smaller, allowing more elements (in excess of 100) in the same physical case.
5G User Equipment including mobile phones and devices will also have MIMO antenna technology built into the device for the mmWave frequencies. 

MIMO – Beam Steering 
Beam steering is a technology that allows the massive MIMO base station antennas to direct the radio signal to the users and devices rather than in all directions. The beam steering technology uses advanced signal processing algorithms to determine the best path for the radio signal to reach the user. This increases efficiency as it reduces interference (unwanted radio signals).

Massive MIMO antennas and advanced beam steering optimise EMF exposure and also increase efficiency.

how does an LED display work?

An LED display is a flat panel display that uses an array of light-emitting diodes as pixels for a display. Their brightness allows them to be used outdoors where they are visible in the sun for store signs and billboards. In recent years, they have also become commonly used in destination signs on public transport vehicles, as well as variable-message signs on highways. LED displays are capable of providing general illumination in addition to visual display, as when used for stage lighting or other decorative (as opposed to informational) purposes. LED displays can offer higher contrast ratios than a projector and are thus an alternative to traditional projection screens, and they can be used for large, uninterrupted (without a visible grid arising from the bezels of individual displays) video walls.

An LED display is basically an LCD display with a modification. Instead of using a CCFL (Cold Cathode Fluorescent Lamp) backlight, it uses an array of LEDs (Light Emitting Diodes) as the light source. All the other components are the same as those of an LCD, such as polarizing filters, liquid crystal and electrodes.

OLED AND QLED

An OLED display uses a panel of pixel-sized organic compounds that respond to electricity. Since each tiny pixel (millions of which are present in modern displays) can be turned on or off individually, OLED displays are called “emissive” displays (meaning they require no backlight). They offer incredibly deep contrast ratios and better per-pixel accuracy than any other display type.

Because they don’t require a separate light source, OLED displays are also amazingly thin – often just a few millimeters. OLED panels are often found on high-end TVs in place of LED/LCD technology, but that doesn’t mean LED/LCDs lack their own premium technology.

QLED is a premium tier of LED/LCD TVs. Unlike OLED displays, QLED is not a so-called emissive display technology (QLED pixels are still illuminated by lights from behind). However, QLED TVs feature an updated illumination technology over regular LED LCDs in the form of Quantum Dot material (hence the “Q” in QLED), which raises overall efficiency and brightness. This translates to better, brighter grayscale and color, and enhances HDR (High Dynamic Range) abilities.

Full array

This method is considered the best LED backlight type.

In a full array LED screen, the LEDs are distributed evenly behind the entire screen. This produces a more uniform backlight and provides a more effective use of local dimming, where it can change the luminosity of only a specific part of the screen.

WHAT IS LOCAL DIMMING?

Local dimming is a feature of LED LCD TVs wherein the LED light source behind the LCD is dimmed and illuminated to match what the picture demands. LCDs can’t completely prevent light from passing through, even during dark scenes, so dimming the light source itself aids in creating deeper blacks and more impressive contrast in the picture. This is accomplished by selectively dimming the LEDs when that particular part of the picture — or region — is intended to be dark.
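
A minimal sketch of the idea behind zone-based dimming (the grid size and the rule of driving each zone to its brightest pixel are illustrative assumptions, not how any particular TV implements it):

# Toy frame: per-pixel luminance (0 = black, 1 = white), 4 rows x 8 columns.
frame = [
    [0.0, 0.0, 0.0, 0.0, 0.9, 1.0, 0.9, 0.8],
    [0.0, 0.1, 0.0, 0.0, 0.8, 0.9, 1.0, 0.9],
    [0.0, 0.0, 0.0, 0.0, 0.7, 0.8, 0.9, 0.8],
    [0.0, 0.0, 0.1, 0.0, 0.8, 0.9, 0.9, 1.0],
]

def zone_backlight(frame, zone_width=4):
    # Each backlight zone covers a block of columns and is driven to the
    # brightest pixel it has to display, so dark regions get a dim backlight.
    zones = []
    for start in range(0, len(frame[0]), zone_width):
        brightest = max(row[c] for row in frame for c in range(start, start + zone_width))
        zones.append(brightest)
    return zones

print(zone_backlight(frame))   # the dark left zone stays dim, the bright right zone stays lit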

Local dimming helps LED/LCD TVs more closely match the quality of older Plasma displays  and modern OLED displays, which feature better contrast levels by their nature — something CCFL LCD TVs couldn’t do. The quality of local dimming varies depending on which type of backlighting your LCD uses, how many individual zones of backlighting are employed, and the quality of the processing. Here’s an overview of how effective local dimming is on each type of LCD TV.

Edge lit

This is the most common method for LED TVs.

With an edge lit LED screen, the LEDs are placed at the periphery of the screen. Depending on the television, they can be all around the screen, only on the sides, or at the bottom. This allows the screen to be thin.

However, it can cause some spots on the screen to be brighter than others, like the edges. This problem is called flashlighting or clouding. It can be seen while watching a dark scene in a dark environment.

Direct lit

Similar to the full array method, the LEDs are directly behind the screen. However, there are very few of them and they cannot be controlled separately to match the luminosity of the picture.

These TVs are not thin because of the space required behind the screen to add the LEDs and to diffuse the light over a big area.

WORKING

The working is the same as that of an LCD display.

To know more about its working, see my previous post: https://dignom307027122.wordpress.com/2020/03/21/working-of-lcd-display/

working of LCD display

You probably use items containing an LCD (liquid crystal display) every day. They are all around us – in laptops, computers, digital clocks and watches, and many other electronic devices. LCDs are common because they offer some real advantages over other display technologies. They are thinner and lighter and draw much less power than cathode ray tubes (CRTs), for example.

Basics of LCD Displays:-

The liquid-crystal display has the distinct advantage of lower power consumption than the LED. It is typically of the order of microwatts for the display, in comparison to the order of milliwatts for LEDs. This low power requirement has made it compatible with MOS integrated logic circuits. Its other advantages are its low cost and good contrast. The main drawbacks of LCDs are the additional requirement of a light source, a limited temperature range of operation (between 0 and 60°C), low reliability, short operating life, poor visibility in low ambient lighting, slow speed and the need for an AC drive.

Basic structure of an LCD

A liquid crystal cell consists of a thin layer (about 10 μm) of a liquid crystal sandwiched between two glass sheets with transparent electrodes deposited on their inside faces. With both glass sheets transparent, the cell is known as a transmittive type cell. When one glass is transparent and the other has a reflective coating, the cell is called reflective type. The LCD does not produce any illumination of its own. In fact, it depends entirely on illumination falling on it from an external source for its visual effect.

Working of LCD

Each pixel of an LCD typically consists of a layer of molecules aligned between two transparent electrodes, and two polarizing filters (parallel and perpendicular), the axes of transmission of which are (in most of the cases) perpendicular to each other. Without the liquid crystal between the polarizing filters, light passing through the first filter would be blocked by the second (crossed) polarizer.

What are liquid crystals?

Solids are frozen lumps of matter that stay in the same place all by themselves, often with their atoms packed in a regular arrangement called a crystal (or crystalline lattice).

Liquids lack the order of solids and, though they stay put if you keep them in a container, they are always looking for a chance to flow out of that container. Now imagine a substance with some of the order of a solid and some of the fluidity of a liquid. What you get is a liquid crystal – a kind of halfway house in between. At any given moment, liquid crystals can be in one of several possible “sub-states” (phases) somewhere between solid and liquid. The two most important liquid crystal phases are called nematic and smectic.

  • When they’re in the nematic phase, liquid crystals are a bit like a liquid: their molecules can move around and shuffle past one another, but they all point in broadly the same direction. They’re a bit like matches in a matchbox: you can shake them and move them about but they all keep pointing the same way.
  • If you cool liquid crystals, they shift over to the smectic phase. Now the molecules form into layers that can slide past one another relatively easily. The molecules in a given layer can move about within it, but they can’t and don’t move into the other layers.

One feature of liquid crystals is that they’re affected by electric current. A particular sort of nematic liquid crystal, called twisted nematic (TN), is naturally twisted. Applying an electric current to these liquid crystals will untwist them to varying degrees, depending on the applied voltage. LCDs use these liquid crystals because they react predictably to electric current in such a way as to control light passage.

Most liquid crystal molecules are rod-shaped and are broadly categorized as  thermotropic or lyotropic.

Thermotropic liquid crystals will react to changes in temperature or, in some cases, pressure. The reaction of lyotropic liquid crystals, which are used in the manufacture of soaps and detergents, depends on the type of solvent they are mixed with. Thermotropic liquid crystals are either isotropic or nematic. The key difference is that the molecules in isotropic liquid crystal substances are randomly arranged, while nematics have a definite order or pattern.

Transmissive Displays

You can easily understand the working of a transmissive LCD display by following a single segment. At one side there is a light source emitting unpolarized light. When it passes through the rear polarizer (say, a vertical polarizer), the light becomes vertically polarized. Then this light enters the liquid crystal. As we saw before, the liquid crystal will twist the polarization if it is ON. So when the vertically polarized light passes through an ON liquid crystal segment, it becomes horizontally polarized. Next is the front polarizer (say, a vertical polarizer), which will block horizontally polarized light, so that segment will appear dark to the observer. If the liquid crystal segment is OFF, it will not change the polarization of light, so it will remain vertically polarized and the front polarizer will pass that light. So it will appear bright to the observer.
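
The light path just described can be summarized as a tiny truth table (assuming, as above, that both the rear and the front polarizers are vertical):

def segment_appearance(segment_on):
    polarization = "vertical"            # after the rear (vertical) polarizer
    if segment_on:
        polarization = "horizontal"      # an ON liquid crystal segment twists the light 90 degrees
    # The front polarizer is also vertical: it passes vertical light and blocks horizontal light.
    return "bright" if polarization == "vertical" else "dark"

print(segment_appearance(True))    # ON segment  -> dark
print(segment_appearance(False))   # OFF segment -> bright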

These displays allow the use of backlights, and are commonly known as backlit LCDs. We can also use ambient light as the source, as in devices such as clocks.

Reflective displays

Generally, calculators use this type of display. The working is similar to that of transmissive displays, except that the light source and the observer are on the same side. There is a reflector on the other side which reflects the light back toward the front. You can easily understand the working if you have understood the working of the transmissive type.

Transflective Displays

As the name indicates, it is a combination of transmissive and reflective displays. It reflects some light back to the observer to make the display visible in good ambient light conditions, and it also uses a backlight, which can be used in poor ambient light conditions.

laser keyboards

The laser keyboard and mouse are striking examples of recent computer peripheral technology. The wireless laser keyboard and mouse are virtual peripherals that can be projected onto, and touched on, any type of surface. This type of keyboard is also called a projection keyboard. This technology is able to record finger movements and translate them into keystrokes on the device, although there are no physical keys.
The laser keyboard projection is essentially an image of a real keyboard that does not physically exist, i.e. it has no physical keys. Most systems may also be used as a virtual mouse or virtual piano.
This is how these devices work: the visible virtual keyboard is projected onto the surface by a laser or, in some systems, by a beamer. The projecting device is equipped with a sensor or camera that picks up the finger movements and touches; when these are detected, they are transformed into actions or characters.
Some systems use an invisible infrared beam as a second beam to create the wireless laser keyboard projection. This type of product projects an invisible infrared beam just above the virtual keyboard, on which one uses the fingers to make keystrokes on the projected keyboard. The system is equipped with sensors and cameras that translate the place where the infrared light was broken into characters.

The virtual keyboard and mouse are the only computer peripherals that work in complete darkness, and they are suitable for Blackberries, smartphones, PDAs, and Mac & tablet PCs. The devices projecting the keyboard come in different sizes, but they are normally the size of a small mobile phone, 90 x 34 x 24 mm. They have the advantage that users can write texts or e-mails more easily and quickly, just as with a usual keyboard, and that when the keyboard is turned off it disappears completely.

A projection keyboard generally works as follows :

  1. A laser or beamer projects a visible virtual keyboard onto a level surface.
  2. A sensor or camera in the projector picks up the finger movements.
  3. The detected co-ordinates determine the actions or characters to be generated.

Some devices use a second (invisible infrared) beam:

  1. An invisible infrared beam is projected just above the virtual keyboard.
  2. A finger makes a keystroke on the virtual keyboard; this breaks the infrared beam and infrared light is reflected back to the projector.
  3. The reflected infrared beam passes through an infrared filter to the camera.
  4. The camera photographs the angle of the incoming infrared light.
  5. A sensor chip determines where the infrared beam was broken or disturbed.
  6. The detected coordinates determine the actions or characters to be generated.
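
A minimal sketch of the last step, mapping a detected fingertip coordinate onto a key of the projected layout (the key size and grid below are made-up example values):

# Hypothetical projected layout: key size in camera pixels and a grid of rows.
KEY_W, KEY_H = 40, 40
ROWS = [
    list("QWERTYUIOP"),
    list("ASDFGHJKL"),
    list("ZXCVBNM"),
]

def coordinate_to_key(x, y):
    row = y // KEY_H
    col = x // KEY_W
    if 0 <= row < len(ROWS) and 0 <= col < len(ROWS[row]):
        return ROWS[row][col]
    return None   # the touch landed outside the projected keyboard

print(coordinate_to_key(95, 50))   # column 2, row 1 -> "D"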

The laser keyboard uses laser and infrared technology to create the virtual keyboard and to project the hologram of a keyboard onto a flat surface.
The projection is realized in four main steps and via three modules: the projection module, the sensor module and the illumination module. The main devices and technologies used to project the hologram are a diffractive optical element, a red laser diode, a CMOS camera with a sensor chip, and an infrared (IR) laser diode.

When a user presses a “virtual key”, the reflected laser light is captured by the camera, and signal processing software installed on the PC/Mac performs all the critical jobs: recognizing the user’s fingertip, performing distance measurement, and mapping the position of the fingertip to the related key character.

Making a virtual laser keyboard

1> The Laser Projection Virtual Keyboard Design

You need:

  • Image camera
  • Keyboard projection laser
  • Infrared filter
  • Linear laser

2> How It Works

At the bottom, an infrared laser emits a plane of light that covers the space just above the surface; this plane covers the entire keyboard area. In the center, the projector draws the outline of the keyboard, which is mainly used for calibration. At the top, a camera captures the scene in real time and passes the data to the computer. Because the laser light is horizontal and parallel to the surface, the camera detects no infrared signal as long as nothing blocks the beam; but when an object enters the infrared laser region, the surface of the occluding object is lit up and the camera detects the infrared signal. The computer takes the signal sent by the camera, applies an algorithm to obtain the coordinates of the infrared spot in the picture, and then maps those coordinates to the corresponding key position on the real keyboard layout, thereby realizing the keyboard function.

3> Choosing a Camera Lens

4> Calibrating the Camera Lens

The picture taken by the camera, which has a 150-degree angle of view, is distorted; in order to correct this distortion, the Matlab camera calibration tool ‘Camera Calibration Toolbox’ is used.

5> The Camera Calibration Toolbox

Although the Camera Calibration Toolbox needs photos shot from only 3 different angles to calibrate the camera well, in this design photos shot from 9 different angles were used to obtain the calibration parameters. These parameters are applied in the correction algorithm during real-time operation, enabling real-time correction of the distortion in the picture.

6> Lens Process

A PC camera detects visible light, and infrared light is normally not allowed to enter, because in practice infrared light can spoil the color fidelity of the whole picture; for this reason, manufacturers often add an infrared filter to PC cameras during product design. In this project, however, the infrared signal needs to be detected, so a layer of infrared filter is added to the PC camera that filters out visible light and only allows infrared light to enter. Because this system uses a 980 nm infrared laser, a 980 nm filter is chosen: it blocks light below 980 nm and has good permeability for light at 980 nm and above.

7> Why Choose a Linear Laser

A 980 nm infrared linear laser is selected as the source for signal detection; the linear laser can cover the entire keyboard range.

8> PC Software

9> Programming and Algorithm

Use the cvCaptureFromCAM() and cvQueryFrame() functions to get the camera image, use the cvCvtColor() function to help binarize the image, use the findContours() function to find the object contours, the drawContours() function to draw the object contours, and the boundingRect() function to draw the bounding rectangle of each contour.
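
A rough Python equivalent of that pipeline using the modern OpenCV API (the camera index, the threshold value and the assumption that the filtered infrared spots show up as bright blobs are all illustrative):

import cv2

cap = cv2.VideoCapture(0)                  # get the camera image (cvCaptureFromCAM/cvQueryFrame)
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)                  # cvCvtColor
    _, binary = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)    # binarize the bright IR spots
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:               # findContours / boundingRect
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # (x + w // 2, y + h // 2) is the spot coordinate that gets mapped to a key position.
cap.release()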

how do sensors work?

Many modern IoT applications rely on sensors for added safety and security, and to easily identify users. Sensors are used in smartphones and other wearables, as well as in smart industry and smart home applications for entry identification and data security.

There are also many types of sensors. Some of these are as follows:

  • Optical
  • Capacitive
  • Mechanical
  • Thermal
  • Dynamic output

Optical sensor

Definition: The method of sensing light rays is called optical sensing. The sensor type used for optical sensing is known as an optical sensor.

An optical sensor converts light rays into an electrical signal. This is similar to the function performed by a photoresistor. Let us understand the working operation of an optical sensor.

Optical Sensor Working Operation

In general, there are two components in optical sensing, viz. a transmitter (i.e. optical source) and a receiver (optical detector). The concept can be illustrated with the example of an optical fiber: the light beam changes its parameters when any object comes in between the transmitter and the receiver. There are five useful parameters of light which are measured in optical sensing: intensity, phase, wavelength, polarization and spectral distribution.

Due to the advent of optical sensing technology, the following physical and chemical measurands can be measured: temperature, flow, pressure, displacement, liquid level, vibration, rotation, acceleration, magnetic fields, force, pH, radiation, chemical species, humidity, strain, electric fields, velocity and acoustic fields.

Optical Sensor Types

Below are the optical sensor types based on different characteristics.
• Point sensor, Distributed sensor
• Extrinsic sensor, Intrinsic sensor
• Through Beam Sensor, Diffuse reflective Sensor, Retro-reflective sensor

Point sensor vs Distributed Sensor

Based on working operation, optical sensor types are divided into point sensors and distributed sensors. A point sensor operates at a single point; the transducer is placed at the end of the optical fiber. An example of this type is the fiber Bragg grating, which is spread across the optical fiber length. They are used to measure temperature or strain. This single-point method of optical sensing uses a phase change for activation of the sensor. A distributed sensor operates over a distribution of points; in this method the sensor is reactive along a long series of sensors or an optical array.

Extrinsic sensor vs Intrinsic Sensor

There are two types of optical sensors based on where the light beam is changed for sensing. If the light beam leaves the optical fiber cable and is changed before it continues on its path to the optical detector, the sensor is known as an extrinsic optical sensor. If the light beam does not leave the optical fiber cable and is changed inside the cable itself, the sensor is known as an intrinsic optical sensor. An intensity-based fiber optic pressure sensor used to measure pressure between two plates is referred to as an intrinsic optical sensor.

Through Beam sensor vs Diffuse Reflective Sensor vs Retro Reflective Sensor

Based on the method of optical sensing and the placement of the optical transmitter and receiver, there are three optical sensor types, viz. through beam, diffuse reflective and retro-reflective.
In a “Through Beam Sensor”, the transmitter and receiver are placed pointing at each other so that they create a straight light beam path. When any object comes in between them, the intensity of light changes and accordingly the object can be detected.

In a “Diffuse Reflective Sensor”, the transmitter and receiver are mounted parallel to each other. The light transmitted by the transmitter is reflected by the object, and this reflected light is measured by the receiver. This type of sensor has a drawback: it struggles to differentiate between red and white objects when a red LED is used as the optical source, because red and white surfaces reflect a similar amount of that light.
In a “Retro-reflective Sensor”, the transmitter and receiver are placed in one housing and a reflector made of a special reflective material is used. The transmitter transmits a light beam which is reflected by the reflector and received by the receiver. If any object comes into this beam path, the beam breaks. Based on the difference in light beam intensity and other parameters, the object can be detected or sensed at the receiver.

Capacitive scanners

The most commonly found fingerprint scanner used today is the capacitive scanner. You’ll find this type of scanner inside most smartphones these days, as it is the most secure. Again, the name gives away the core component, the capacitor, provided you’re familiar with a little electronics.

Instead of creating a traditional image of a fingerprint, capacitive fingerprint scanners use tiny capacitor circuits to collect data about a fingerprint. As capacitors can store electrical charge, connecting them up to conductive plates on the surface of the scanner allows them to be used to track the details of a fingerprint. The charge stored in a capacitor will change slightly when a finger’s ridge is placed over the conductive plates, while an air gap will leave the charge at the capacitor relatively unchanged. An op-amp integrator circuit is used to track these changes, which can then be recorded by an analogue-to-digital converter.

Once captured, this digital data can be analyzed to look for distinctive and unique fingerprint attributes, which can be saved for comparison. What is particularly smart about this design is that it is much tougher to fool than an optical scanner. The results can’t be replicated with an image, and it is incredibly tough to fool with some sort of prosthetic, as different materials will record slightly different changes in charge at the capacitor. The only real security risks come from either hardware or software hacking.

Creating a large enough array of these capacitors, typically hundreds if not thousands in a single scanner, allows a highly detailed image of the ridges and valleys of a fingerprint to be created from nothing more than electrical signals. Just like the optical scanner, more capacitors result in a higher resolution scanner, increasing the level of security, up to a certain point.

Due to the larger number of components in the detection circuit, capacitive scanners had previously been quite pricey. Some early implementations attempted to cut the number of capacitors needed by using “swipe” scanners, which would collect data from a smaller number of capacitor components by quickly refreshing the results as a finger is pulled over the sensor. As many consumers complained at the time, this method was very finicky and often required several attempts to scan the result correctly. Fortunately, these days, the simple press and hold design is far more common.

You can do more than just read fingerprints with these scanners; newer models sport gesture and swipe functionality too. These can be used as soft buttons to act as navigation keys, for force sensing, or as a way to interact with other UI elements. A number of higher-end smartphones support a wider variety of swipe and navigation features using their fingerprint scanners.

Mechanical Sensor

The basic principle of mechanical sensors relies on the mechanical deformation of a device being translated into an electrical signal. The mechanical deformation can be measured in a number of ways, such as piezoelectricity, changes in electrical resistance with geometry, changes in capacitance, and changes in the resonant frequency of vibrating systems.

Bending Sensors

A mechanical sensor has been described that is manufactured from a polymer film. Its upper part is modified to be electrically conductive, but its lower part remains as an insulator. When a strain is applied to the film, the mechanical sensor distorts. The electrical resistance of the upper part changes. In this way, the strain can be measured.

The polymer used can be a poly(imide), a poly(phenylquinoxaline) or a poly(phenylene sulfide) (4). The film can be irradiated by an ion beam via a mask to form patterns of conductive lines, aligned with the direction in which the sensor will distort during use. The conductive lines can also be produced by reactive ionic etching or photoablation, by using an excimer laser.
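Since the sensing principle is a resistance change under strain, the usual first-order strain-gauge relation ΔR/R = GF × ε can be used to turn a measured resistance change back into strain. The short sketch below illustrates that relation; the gauge factor of 2.0 and the 120-ohm example values are typical assumed numbers, not properties of the polymer sensor described above.

```python
def strain_from_resistance(r_unstrained: float, r_strained: float, gauge_factor: float = 2.0) -> float:
    """Return strain (dimensionless) from the relative resistance change,
    using dR/R = gauge_factor * strain."""
    delta_r = r_strained - r_unstrained
    return (delta_r / r_unstrained) / gauge_factor

# Example: a 120.00-ohm conductive line rises to 120.24 ohm under load,
# giving a strain of 0.001 (0.1 %).
print(strain_from_resistance(120.00, 120.24))
```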

Thermal Sensor

 Temperature sensing can be done either through direct contact with the heating source, or remotely, without direct contact with the source using radiated energy instead. There are a wide variety of temperature sensors on the market today, including Thermocouples, Resistance Temperature Detectors (RTDs), Thermistors, Infrared, and Semiconductor Sensors.

5 Types of Temperature Sensors  

  • Thermocouple: It is a type of temperature sensor, which is made by joining two dissimilar metals at one end. The joined end is referred to as the HOT JUNCTION. The other end of these dissimilar metals is referred to as the COLD JUNCTION. The cold junction is actually formed at the last point of thermocouple material. If there is a difference in temperature between the hot junction and cold junction, a small voltage is created. This voltage is referred to as an EMF (electro-motive force) and can be measured and in turn used to indicate temperature.
  • RTD: The RTD is a temperature-sensing device whose resistance changes with temperature. Typically built from platinum, though devices made from nickel or copper are not uncommon, RTDs can take many different forms, such as wire-wound and thin-film. To measure the resistance across an RTD, apply a constant current, measure the resulting voltage, and determine the RTD’s resistance. RTDs exhibit fairly linear resistance-to-temperature curves over their operating regions, and any nonlinearity is highly predictable and repeatable. The PT100 RTD evaluation board uses a surface-mount RTD to measure temperature. An external 2-, 3- or 4-wire PT100 can also be connected to measure temperature in remote areas. The RTDs are biased using a constant current source; to reduce self-heating due to power dissipation, the current magnitude is kept moderately low. (A short conversion sketch for an RTD and a thermistor follows this list.)
  • Thermistors: Similar to the RTD, the thermistor is a temperature-sensing device whose resistance changes with temperature. Thermistors, however, are made from semiconductor materials. Resistance is determined in the same manner as for the RTD, but thermistors exhibit a highly nonlinear resistance vs. temperature curve. Thus, within a thermistor’s operating range, we can see a large resistance change for a very small temperature change. This makes for a highly sensitive device.
  • Semiconductor sensors: They are classified into different types such as voltage-output, current-output, digital-output, resistance-output silicon, and diode temperature sensors. Advanced semiconductor temperature sensors offer high accuracy and high linearity over an operating range of about −55°C to +150°C. Internal amplifiers can scale the output to convenient values, such as 10 mV/°C. They are also useful in cold-junction compensation circuits for wide-temperature-range thermocouples.
  • Digital temperature sensors: These sensors eliminate the need for extra components, such as an A/D converter, within the application, and there is no need to calibrate components or the system at specific reference temperatures as is needed when using thermistors. Digital temperature sensors handle everything internally, simplifying the basic system temperature-monitoring function.
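As referenced in the RTD bullet above, here is a minimal sketch of converting raw resistance readings into temperature. The PT100 conversion uses the standard alpha of 0.00385/°C in a linear approximation (a real design would use the Callendar-Van Dusen equation), and the thermistor conversion uses the common beta-parameter model; the 10 kΩ / 25°C / 3950 K thermistor figures are typical assumed datasheet values, not values from the text.

```python
import math

def pt100_temperature(resistance_ohm: float, r0: float = 100.0, alpha: float = 0.00385) -> float:
    """Linear PT100 approximation: R(T) = r0 * (1 + alpha * T), solved for T in degC."""
    return (resistance_ohm / r0 - 1.0) / alpha

def thermistor_temperature(resistance_ohm: float, r0: float = 10_000.0,
                           t0_c: float = 25.0, beta: float = 3950.0) -> float:
    """NTC beta-parameter model: 1/T = 1/T0 + ln(R/R0)/beta, with T in kelvin."""
    t0_k = t0_c + 273.15
    inv_t = 1.0 / t0_k + math.log(resistance_ohm / r0) / beta
    return 1.0 / inv_t - 273.15

print(pt100_temperature(109.73))        # ~25.3 degC
print(thermistor_temperature(10_000.0)) # 25.0 degC
```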

How does the Internet work?

When you chat to somebody on the Net , do you ever stop to think how many different computers you are using in the process? There’s the computer on your own desk, of course, and another one at the other end where the other person is sitting, ready to communicate with you. But in between your two machines, making communication between them possible, there are probably about a dozen other computers bridging the gap. Collectively, all the world’s linked-up computers are called the Internet. How do they talk to one another? Let’s take a closer look!

FIRST OF ALL WHAT IS INTERNET ?

The Internet (a portmanteau of interconnected network) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to link devices all around the world. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.

WORKING OF INTERNET

The Internet is based on the concept of a client-server relationship between computers, also called client/server architecture. In a client/server architecture, some computers act as information providers (servers), while other computers act as information receivers (clients). The client/server architecture is not one-to-one; that is, a single client computer may access many different servers, and a single server may be accessed by a number of different client computers. Before the mid-1990s, servers were usually very powerful computers such as mainframes or supercomputers, with extremely high processing speeds and large amounts of memory. Personal computers and workstations, however, are now capable of acting as Internet servers due to advances in computing technology. A client computer is any computer that receives information from a server. A client computer may be a personal computer, a pared-down computer (sometimes called a Web appliance), or a wireless device such as a handheld computer or a cellular phone.

To access information on the Internet, the user must first connect to the client computer’s host network. A host network is a network that the client computer is part of, and is usually a local area network (LAN). Once a connection has been established, the user may request information from a remote server. If the information requested by the user resides on one of the computers on the host network, that information is quickly retrieved and sent to the user’s terminal. If the information requested by the user is on a server that does not belong to the host LAN, then the host network connects to other networks until it makes a connection with the network containing the requested server. In the process of connecting to other networks, the host may need to access a router, a device that determines the best connection path between networks and helps networks to make connections.

Once the client computer establishes a connection with the server containing the requested information, the server sends the information to the client in the form of a file. A special computer program called a web browser or internet browser enables the user to view the file. Examples of Internet browsers are Mosaic, Netscape, and Internet Explorer. Multimedia files can only be viewed with a browser. Their pared-down counterparts, text-only documents, can be viewed without browsers. Many files are available in both multimedia and text-only versions. The process of retrieving files from a remote server to the user’s terminal is called downloading.
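The client/server exchange and the “downloading” step described above can be reduced to a few lines of code. The sketch below uses Python’s standard library to act as a client: it requests a file from a remote server and saves the response locally. The URL and the output filename are placeholders.

```python
from urllib.request import urlopen

url = "http://example.com/"                # placeholder address of a remote server
with urlopen(url) as response:             # the client sends a request to the server
    data = response.read()                 # the server answers with the file's bytes

with open("downloaded.html", "wb") as f:   # saving the file locally = "downloading"
    f.write(data)

print(f"Downloaded {len(data)} bytes from {url}")
```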

ROLE OF ISP (Internet Service Provider)

To be able to connect to the Internet, we must first obtain access by subscribing to an Internet Service Provider (ISP).
An ISP is a company that offers us the service of connecting to the Internet. To access the Internet, we simply dial in to the ISP via a modem, and the ISP takes care of the details necessary to connect to the Internet, including the cost of the onward connection.

So, for example, if you are accessing a web page hosted abroad, it is the ISP that bears the cost of the international link.
We only pay for the local call used to contact the ISP.

An Internet Service Provider is a company or entity that provides Internet connection services and other related services.
Most telephone companies are Internet service providers. They provide services such as connection to the Internet, domain name registration, hosting, etc.

An ISP has networks both domestically and abroad, so that its customers can use the connection it provides to reach the global Internet.
The transmission medium used to carry the data can be either wired (modem, leased line, and broadband), radio, etc.

Choice of ISP

Typically, ISPs charge their customers a monthly fee. Connections are usually divided into two categories:

1. Modem (“dial-up”)

2. Broadband

Dial-up connections are now widely offered for free or at a low price and require only ordinary telephone lines. Broadband connections can be ISDN, wireless (non-cable), cable modem, DSL, or satellite.
Compared with a dial-up modem, broadband has a much faster speed and is always “on”, but it is more expensive.

ROLE OF OPTIC FIBER CABLES IN INTERNET COMMUNICATION

An optical fiber is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer. Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of, confined spaces, as in the case of a fiberscope.

Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers, while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers normally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
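As a worked example of the total internal reflection condition that keeps light in the core, the short calculation below derives the critical angle and numerical aperture from the core and cladding indices. The index values are typical assumed figures for a silica fiber, not numbers from the text.

```python
import math

n_core = 1.475       # core refractive index (assumed typical value)
n_cladding = 1.460   # cladding index, slightly lower (assumed typical value)

# Light hitting the core/cladding boundary at more than the critical angle
# (measured from the normal) is totally internally reflected and stays in the core.
critical_angle = math.degrees(math.asin(n_cladding / n_core))

# The numerical aperture describes the cone of light the fiber can accept.
numerical_aperture = math.sqrt(n_core**2 - n_cladding**2)

print(f"Critical angle inside the core: {critical_angle:.1f} degrees")   # ~81.8
print(f"Numerical aperture: {numerical_aperture:.3f}")                   # ~0.21
```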

Being able to join optical fibers with low loss is important in fiber-optic communication. This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection, a fusion splice is common. In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice, where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors.

Advantages of Using Fiber Optic Cables

Fiber cables have several advantages over long-distance copper cabling.

  • Fiber optics support a higher capacity. The amount of network bandwidth a fiber cable can carry easily exceeds that of a copper cable with similar thickness. Fiber cables rated at 10 Gbps, 40 Gbps, and 100 Gbps are standard.
  • Because light can travel for much longer distances over a fiber cable without losing its strength, the need for signal boosters is reduced.
  • A fiber optic cable is less susceptible to interference. A copper network cable requires shielding to protect it from electromagnetic interference. While this shielding helps, it is not sufficient to prevent interference when many cables are strung together in proximity to one another. The physical properties of fiber optic cables avoid most of these problems.

How do earbuds work?

Bluetooth earbuds connect to your telephone through a specialized type of wireless network. Using technology developed specifically to eliminate the unsightly and ungainly wires that once had to be used to connect a headset to a telephone, Bluetooth headsets enable you to speak and hear through an earpiece while leaving your hands free. The technology involved in a headset helps ensure that its use is safe and high-quality, all while maintaining the security of your telephone. Bluetooth dates back to 1998, and the first Bluetooth earpieces using this technology were shipped in 2000.

A Bluetooth device emits low-power radio signals in the Ultra High Frequency (UHF) band. These signals can travel up to 10 meters and more. Bluetooth devices do not require line-of-sight positioning in order to communicate. Because Bluetooth uses the UHF band, its signals travel free from interference from the lower-frequency signals transmitted by radio, TV, etc. Using Bluetooth technology, a device can connect simultaneously with up to 7 other devices within a 10-meter range. To avoid interference, this technology uses a spread-spectrum frequency-hopping technique. A Bluetooth system creates a Personal Area Network (PAN), or piconet, which is the basis of this technology. Devices within the piconet communicate by transmitting and receiving data. Bluetooth sharing is not always secure, as the transmissions are sent in the open; people with malicious intent may eavesdrop on the data transmitted.
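To illustrate the spread-spectrum frequency-hopping idea (and only the idea; this is not the real Bluetooth hop-selection algorithm), the sketch below has two paired devices seed the same pseudo-random generator from a shared value, so both derive the same hop order across Bluetooth Classic’s 79 channels and stay in sync.

```python
import random

CHANNELS = 79  # Bluetooth Classic uses 79 one-MHz channels at 2402 + k MHz, k = 0..78

def hop_sequence(shared_seed: int, hops: int) -> list[int]:
    """Derive a pseudo-random channel order from a value both devices share."""
    rng = random.Random(shared_seed)
    return [rng.randrange(CHANNELS) for _ in range(hops)]

master = hop_sequence(shared_seed=0xC0FFEE, hops=8)   # e.g. the phone
earbud = hop_sequence(shared_seed=0xC0FFEE, hops=8)   # e.g. the earbud
print(master)            # the next 8 channel indices to transmit on
print(master == earbud)  # True: both ends hop to the same channels in the same order
```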

So first, let’s look inside a Bluetooth earbud. It is basically a speaker, a microphone, a battery, and a circuit contained inside a plastic case. The circuit can be regarded as the control unit.

Bluetooth Network

A connected Bluetooth earbud becomes part of a special, localized wireless network. The earbud acts as both the transmitter and receiver of the wireless signal. Bluetooth’s signal itself is different from similar wireless signals because it consists of radio waves configured using a complex algorithm to ensure clear reception and transmission between your earbud and your phone. The result is clear sound coming into your ear, and clear speech transmitted through your microphone. Bluetooth wireless networks require a parent device, which in the case of an earbud is your telephone. Multiple devices can be connected to a single Bluetooth parent device. However, those devices, like your earbud, can’t communicate with each other; they can only send and receive signals from the parent phone. The particular configuration of a Bluetooth network limits the total number of peripherals, such as a headset, that can be connected to a master device to seven.

Pairing

For an earbud to communicate information to the telephone, it needs to be paired with the handset. Pairing is the term used to describe the process by which an information link is created between a Bluetooth accessory and a parent device. For pairing to work, both devices must have Bluetooth turned on and be set to a discoverable mode. Earbuds have a Bluetooth PIN that you must enter into the phone to activate the pairing; this tells the phone to authorize the creation of the connection.
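The PIN exchange itself is handled by the operating system’s Bluetooth stack, but once the earbud and phone are paired, an application can open a serial-style RFCOMM channel to the paired device. The sketch below uses Python’s native Bluetooth socket support, which is only available on Linux builds; the device address, channel number, and the bytes sent are placeholders for illustration.

```python
import socket

EARBUD_ADDR = "00:11:22:33:44:55"   # placeholder Bluetooth address of the paired device
RFCOMM_CHANNEL = 1                  # placeholder RFCOMM channel

# Open a stream connection over RFCOMM to the already-paired device (Linux only).
sock = socket.socket(socket.AF_BLUETOOTH, socket.SOCK_STREAM, socket.BTPROTO_RFCOMM)
try:
    sock.connect((EARBUD_ADDR, RFCOMM_CHANNEL))
    sock.send(b"hello")             # example payload over the established link
finally:
    sock.close()
```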

CHARGING THE BUDS AND THE CASE

Charge the earbuds

Put the earbuds into the case and press them firmly until the earbud lights turn on. Twist the case until it’s completely closed.

Tip: Earbuds are designed to fit snugly in the case. You may need to push the earbuds from the side to remove them.

Charge the case

Charge the case by plugging in the charging cable. The case light turns on when it’s charging.

Battery Backup Of An Earbud

Basically, average earbuds have a battery backup of up to 3 hours, depending on the task. If you are using them to listen to music, this may drop to 2 or 2.5 hours, and if you are using them for calls, they may give you a talk time of around 1.5 hours.

Quality earbuds can give you a battery backup of up to 5 hours.

The case of an average pair of earbuds can recharge them about 6-7 times.
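Putting those figures together gives a rough estimate of total playtime per fully charged case. The sketch below is just that back-of-the-envelope arithmetic, using the numbers quoted above.

```python
hours_per_charge = 3    # average earbud battery backup (music listening may be less)
case_recharges = 6      # a typical case refills the earbuds about 6-7 times

total_hours = hours_per_charge * (1 + case_recharges)   # first charge plus recharges
print(f"Total listening time per fully charged case: about {total_hours} hours")  # ~21
```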
