An Introduction to Machine Learning and AI

February 23, 2020


People define intelligence in many different ways. One common definition is the ability to acquire and apply knowledge and skills. More broadly, you can say that intelligence involves mental activities such as the following:

» Learning: Having the ability to obtain and process new information.

» Reasoning: Being able to manipulate information in various ways.

» Understanding: Considering the result of information manipulation.

» Grasping truths: Determining the validity of the manipulated information.

» Seeing relationships: Divining how validated data interacts with other data.

» Considering meanings: Applying truths to particular situations in a manner consistent with their relationship.

» Separating fact from belief: Determining whether the data is adequately supported by provable sources that can be demonstrated to be consistently valid.

Intelligence often follows a process that a computer system can mimic as part of a simulation:

1. Set a goal based on needs or wants.

2. Assess the value of any currently known information in support of the goal.

3. Gather additional information that could support the goal.
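The three steps above can be sketched in code. The following is a minimal toy loop, with all names and data invented for illustration (none of them come from the original text): given a goal, it assesses the known facts that support it, then keeps gathering new information until the goal is adequately supported.

```python
# Toy sketch of the goal-driven process: assess known information,
# then gather more until the goal has enough support. All names and
# data are hypothetical illustrations.

def pursue_goal(goal, known_facts, gather_more):
    """Assess known facts, then gather new ones until the goal is supported."""
    # Step 2: assess the value of currently known information.
    support = [fact for fact in known_facts if goal in fact]
    # Step 3: gather additional information that could support the goal.
    while len(support) < 3:
        new_fact = gather_more(goal)
        if new_fact is None:          # nothing left to gather
            break
        if goal in new_fact:
            support.append(new_fact)
    return support

facts = ["travel is fun", "flights to Paris are cheap"]
extra = iter(["Paris has museums", "Paris trains run late", None])
result = pursue_goal("Paris", facts, lambda goal: next(extra))
```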

Artificial intelligence (AI)

Artificial intelligence (AI) is truly a revolutionary feat of computer science, set to become a core component of all modern software over the coming years and decades. It is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.
Machine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to perform a specific task without explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task. In the past decade, machine learning has given us self-driving cars, practical speech recognition, effective web search, and a vastly improved understanding of the human genome. Machine learning is so pervasive today that you probably use it dozens of times a day without knowing it. Many researchers also think it is the best way to make progress toward human-level AI.
Machine learning is a method of data analysis that automates analytical model building. It is based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Machine learning algorithms are used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible to develop a conventional algorithm that performs the task effectively.
Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory, and application domains to the field. Data mining is a field of study within machine learning that focuses on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics. As the name suggests, machine learning gives computers the quality that makes them most similar to humans: the ability to learn. It is actively used today, perhaps in many more places than one would expect.
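To make "building a model from training data" concrete, here is a deliberately tiny sketch: fitting a straight line y = a·x + b by least squares in plain Python, then using the fitted model to predict an unseen input. The data and function names are invented for illustration; real systems use far richer models and libraries.

```python
# Toy "learning from data": fit y = a*x + b by least squares, then
# predict for an input the model has never seen. Names and data are
# illustrative only.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of x and y divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# "Training data": the program is never told the rule y = 2x + 1.
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
a, b = fit_line(xs, ys)
prediction = a * 5 + b   # predict for an unseen input
```

The key point mirrors the text: the rule relating inputs to outputs is inferred from examples, never written down explicitly in the program.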

» Acting humanly: When a computer acts like a human, it best reflects the Turing
test, in which the computer succeeds when differentiation between the computer
and a human isn’t possible. This category also reflects what the media would have you
believe AI is all about. You see it employed for technologies such as natural
language processing, knowledge representation, automated reasoning, and
machine learning (all four of which must be present to pass the test).
The original Turing Test didn't include any physical contact. The newer Total
Turing Test does include physical contact in the form of perceptual ability
interrogation, which means that the computer must also employ both
computer vision and robotics to succeed. Modern techniques include the idea
of achieving the goal rather than mimicking humans completely. For example,
the Wright Brothers didn’t succeed in creating an airplane by precisely copying
the flight of birds; rather, the birds provided ideas that led to aerodynamics
that eventually led to human flight. The goal is to fly. Both birds and humans
achieve this goal, but they use different approaches.

» Thinking humanly: When a computer thinks like a human, it performs tasks
that require intelligence (as contrasted with rote procedures) from a human
to succeed, such as driving a car. To determine whether a program thinks like
a human, you must have some method of determining how humans think,
which the cognitive modeling approach defines. This model relies on three
techniques:
• Introspection: Detecting and documenting the techniques used to achieve
goals by monitoring one’s own thought processes.
• Psychological testing: Observing a person's behavior and adding it to a
database of similar behaviors from other persons given a similar set of
circumstances, goals, resources, and environmental conditions (among
other things).
• Brain imaging: Monitoring brain activity directly through various mechanical means, such as Computerized Axial Tomography (CAT), Positron
Emission Tomography (PET), Magnetic Resonance Imaging (MRI), and
Magnetoencephalography (MEG).

After creating a model, you can write a program that simulates the model.
Given the amount of variability among human thought processes and the
difficulty of accurately representing these thought processes as part of a
program, the results are experimental at best. This category of thinking
humanly is often used in psychology and other fields in which modeling the
human thought process to create realistic simulations is essential.
» Thinking rationally: Studying how humans think using some standard
enables the creation of guidelines that describe typical human behaviors. A
person is considered rational when following these behaviors within certain
levels of deviation. A computer that thinks rationally relies on the recorded
behaviors to create a guide as to how to interact with an environment based
on the data at hand. The goal of this approach is to solve problems logically,
when possible. In many cases, this approach would enable the creation of a
baseline technique for solving a problem, which would then be modified to
actually solve the problem. In other words, the solving of a problem in
principle is often different from solving it in practice, but you still need a
starting point.
» Acting rationally: Studying how humans act in given situations under specific
constraints enables you to determine which techniques are both efficient and
effective. A computer that acts rationally relies on the recorded actions to
interact with an environment based on conditions, environmental factors, and
existing data. As with rational thought, rational acts depend on a solution in
principle, which may not prove useful in practice. However, rational acts do
provide a baseline upon which a computer can begin negotiating the successful completion of a goal.

Human processes differ from rational processes in their outcome. A process is rational
if it always does the right thing based on the current information, given an ideal
performance measure. In short, rational processes go by the book and assume that the
book is actually correct. Human processes involve instinct, intuition, and other variables
that don’t necessarily reflect the book and may not even consider the existing data. As
an example, the rational way to drive a car is to always follow the laws. However, traffic
isn’t rational. If you follow the laws precisely, you end up stuck somewhere because
other drivers aren’t following the laws precisely. To be successful, a self-driving car must
therefore act humanly, rather than rationally.

» Reactive machines: The machines you see beating humans at chess or playing
on game shows are examples of reactive machines. A reactive machine has no
memory or experience upon which to base a decision. Instead, it relies on pure
computational power and smart algorithms to recreate every decision every
time. This is an example of a weak AI used for a specific purpose.
» Limited memory: A self-driving car or autonomous robot can’t afford the time
to make every decision from scratch. These machines rely on a small amount of
memory to provide experiential knowledge of various situations. When the
machine sees the same situation, it can rely on experience to reduce reaction
time and to provide more resources for making new decisions that haven’t yet
been made. This is an example of the current level of strong AI.
» Theory of mind: A machine that can assess both its required goals and the
potential goals of other entities in the same environment has a kind of
understanding that is feasible to some extent today, but not in any commercial form. However, for self-driving cars to become truly autonomous, this
level of AI must be fully developed. A self-driving car would not only need to
know that it must go from one point to another, but also intuit the potentially
conflicting goals of drivers around it and react accordingly.
» Self-awareness: This is the sort of AI that you see in movies. However, it
requires technologies that aren’t even remotely possible now because such a
machine would have a sense of both self and consciousness. In addition,
instead of merely intuiting the goals of others based on environment and
other entity reactions, this type of machine would be able to infer the intent of
others based on experiential knowledge.
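The reactive vs. limited-memory distinction can be sketched in a few lines of code. In this hypothetical example (all names are invented for illustration), the reactive agent recomputes every decision from scratch, while the limited-memory agent caches decisions for situations it has already seen, trading a little memory for faster reactions.

```python
# Sketch of reactive vs. limited-memory behavior. All names are
# hypothetical illustrations, not from the original text.

def decide(situation):
    """Stand-in for an expensive decision computed from scratch."""
    return f"plan-for-{situation}"

class ReactiveAgent:
    def act(self, situation):
        return decide(situation)            # recomputed every single time

class LimitedMemoryAgent:
    def __init__(self, capacity=100):
        self.memory = {}                    # small store of past decisions
        self.capacity = capacity
    def act(self, situation):
        if situation not in self.memory:    # only compute unseen situations
            if len(self.memory) >= self.capacity:
                self.memory.pop(next(iter(self.memory)))  # forget the oldest
            self.memory[situation] = decide(situation)
        return self.memory[situation]

agent = LimitedMemoryAgent()
first = agent.act("merge-left")    # computed from scratch
again = agent.act("merge-left")    # recalled from experience
```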

Considering AI Uses

You find AI used in a great many applications today. The only problem is that the
technology works so well that you don’t know that it even exists. In fact, you
might be surprised to find that many devices in your home already make use of
AI. For example, some smart thermostats automatically create schedules for you
based on how you manually control the temperature. Likewise, voice input that is
used to control some devices learns how you speak so that it can better interact
with you. AI definitely appears in your car and most especially in the workplace.
In fact, the uses for AI number in the millions — all safely out of sight even when
they’re quite dramatic in nature. Here are just a few of the ways in which you
might see AI used:
» Fraud detection: You get a call from your credit card company asking
whether you made a particular purchase. The credit card company isn’t being
nosy; it’s simply alerting you to the fact that someone else could be making a
purchase using your card. The AI embedded within the credit card company’s
code detected an unfamiliar spending pattern and alerted someone to it.

» Resource scheduling: Many organizations need to schedule the use of
resources efficiently. For example, a hospital may have to determine where to
put a patient based on the patient’s needs, availability of skilled experts, and
the amount of time the doctor expects the patient to be in the hospital.
» Complex analysis: Humans often need help with complex analysis because
there are literally too many factors to consider. For example, the same set of
symptoms could indicate more than one problem. A doctor or other expert
might need help making a diagnosis in a timely manner to save a patient’s life.
» Automation: Any form of automation can benefit from the addition of AI to
handle unexpected changes or events. A problem with some types of
automation today is that an unexpected event, such as an object in the wrong
place, can actually cause the automation to stop. Adding AI to the automation
can allow the automation to handle unexpected events and continue as if
nothing happened.
» Customer service: The customer service line you call today may not even
have a human behind it. The automation is good enough to follow scripts and
use various resources to handle the vast majority of your questions. With
good voice inflection (provided by AI as well), you may not even be able to tell
that you’re talking with a computer.
» Safety systems: Many of the safety systems found in machines of various
sorts today rely on AI to take over the vehicle in a time of crisis. For example,
many automatic braking systems rely on AI to stop the car based on all the
inputs that a vehicle can provide, such as the direction of a skid.
» Machine efficiency: AI can help control a machine in such a manner as to
obtain maximum efficiency. The AI controls the use of resources so that the
system doesn’t overshoot speed or other goals. Every ounce of power is used
precisely as needed to provide the desired service.
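Taking the fraud-detection item above as an example, the idea of an "unfamiliar spending pattern" can be illustrated with a very crude statistical sketch: flag any purchase that falls far outside the card's usual amounts. Real fraud systems are vastly more sophisticated; the function, data, and threshold here are invented purely for illustration.

```python
# Crude fraud-detection sketch: flag a purchase whose amount is a
# statistical outlier relative to the card's history (z-score test).
# Threshold and data are arbitrary illustrations.

def flag_unusual(history, new_amount, threshold=3.0):
    n = len(history)
    mean = sum(history) / n
    variance = sum((x - mean) ** 2 for x in history) / n
    std = variance ** 0.5
    if std == 0:                      # no variation in past spending
        return new_amount != mean
    return abs(new_amount - mean) / std > threshold

usual = [12.50, 9.99, 14.20, 11.75, 13.00]
typical_flagged = flag_unusual(usual, 12.00)    # fits the pattern
outlier_flagged = flag_unusual(usual, 950.00)   # far outside the pattern
```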

Avoiding AI Hype

You may have heard of something called the singularity, which is the source of many of the claims presented in the media and movies. The singularity is essentially a master algorithm that encompasses all five tribes of learning used within machine learning. To achieve what these sources are telling you, the machine must be able to learn as a human would, as specified by the seven kinds of intelligence discussed at the beginning of this article.

Here are the five tribes of learning:
» Symbolists: The origin of this tribe is in logic and philosophy. This group relies
on inverse deduction to solve problems.
» Connectionists: This tribe’s origin is in neuroscience and the group relies on
backpropagation to solve problems.
» Evolutionaries: The evolutionaries tribe originates in evolutionary biology,
relying on genetic programming to solve problems.
» Bayesians: This tribe’s origin is in statistics and relies on probabilistic inference to solve problems.
» Analogizers: The origin of this tribe is in psychology. The group relies on
kernel machines to solve problems.
The ultimate goal of machine learning is to combine the technologies and strategies embraced by the five tribes to create a single algorithm (the master algorithm)
that can learn anything.
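To make one of the tribes concrete: the Bayesians' probabilistic inference, in its simplest form, is just Bayes' rule. This toy example (all probabilities invented for illustration) updates the probability that an email is spam after observing that it contains the word "offer".

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# All probabilities below are invented for illustration.

def bayes_update(prior, likelihood, likelihood_given_not):
    """Return the posterior probability of a hypothesis given evidence."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

posterior = bayes_update(prior=0.2,                 # P(spam)
                         likelihood=0.6,            # P("offer" | spam)
                         likelihood_given_not=0.05) # P("offer" | not spam)
# The evidence raises the spam probability from 0.2 to 0.75.
```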

Connecting AI to the Underlying Computer

To see AI at work, you need to have some sort of computing system, an application
that contains the required software, and a knowledge base. The computing system
could be anything with a chip inside; in fact, a smartphone does just as well as a
desktop computer for some applications. Of course, if you’re Amazon and you
want to provide advice on a particular person’s next buying decision, the smartphone won’t do — you need a really big computing system for that application.

The size of the computing system is directly proportional to the amount of work
you expect the AI to perform.
The application can also vary in size, complexity, and even location. For example,
if you’re a business and want to analyze client data to determine how best to make
a sales pitch, you might rely on a server-based application to perform the task.
On the other hand, if you’re a customer and want to find products on Amazon to
go with your current purchase items, the application doesn’t even reside on your
computer; you access it through a web-based application located on Amazon's
servers.

The knowledge base varies in location and size as well. The more complex the
data, the more you can obtain from it, but the more you need to manipulate it as
well. You get no free lunch when it comes to knowledge management. The
interplay between location and time is also important. A network connection
affords you access to a large knowledge base online but costs you in time because
of the latency of network connections. However, localized databases, while fast,
tend to lack details in many cases.

Defining the Role of Data

There is nothing new about data. Every interesting application ever written for a computer has data associated with it. Data comes in many forms, some organized, some not. What has changed is the amount of data. Some people find it almost terrifying that we now have access to so much data that details nearly every aspect of most people's lives, sometimes to a level that even the person doesn't realize. In addition, the use of advanced hardware and improvements in algorithms make data the universal resource for AI today.

To work with data, you must first obtain it. Today, applications collect data manually, as done in the past, and also automatically, using new methods. However, it's not a matter of just one or two data collection techniques; collection methods fall on a continuum from fully manual to fully automatic. Raw data doesn't usually work well for analysis purposes. This chapter also helps you understand the need for manipulating and shaping the data so that it meets specific requirements. You also discover the need to define the truth value of the data to ensure that analysis outcomes match the goals set for applications in the first place.

Interestingly, you also have data acquisition limits to deal with. No technology currently exists for grabbing thoughts from someone's mind through telepathic means. Of course, other limits exist, too, most of which you probably already know about but may not have considered.

Finding Data Ubiquitous

More than a buzzword used by vendors to propose new ways to store and analyze data, the big data revolution is an everyday reality and a driving force of our times. You may have heard big data mentioned in many specialized scientific and business publications and even wondered what the term really means. From a technical perspective, big data refers to amounts of computer data so large and intricate that applications can't deal with them simply by adding storage or increasing computer power.

Big data implies a revolution in data storage and manipulation. It affects what you can achieve with data in more qualitative terms (in addition to doing more, you can perform tasks better). Computers store big data in many formats, but from the computer's perspective, data is always a stream of ones and zeros (the core language of computers). You can view data as being one of two types, depending on how you produce and consume it. Some data has a clear structure (you know exactly what it contains and where to find every piece of data), whereas other data is unstructured (you have an idea of what it contains, but you don't know exactly how it is arranged).

Typical examples of structured data are database tables, in which information is arranged into columns and each column contains a specific type of information. Data is often structured by design. You gather it selectively and record it in its correct place. For example, you might want to place a count of the number of people buying a certain product in a specific column, in a specific table, in a specific database. As with a library, if you know what data you need, you can find it immediately.

Unstructured data consists of images, videos, and sound recordings. You may use an unstructured form for text so that you can tag it with characteristics, such as size, date, or content type. Usually you don't know exactly where data appears in an unstructured dataset because the data appears as sequences of ones and zeros that an application must interpret or visualize.

Transforming unstructured data into a structured form can cost lots of time and effort and can involve the work of many people. Most of the data of the big data revolution is unstructured and stored as it is, unless someone renders it structured. This copious and sophisticated data store didn't appear overnight. It took time to develop the technology to store this amount of data. In addition, it took time to spread the technology that generates and delivers data, namely computers, sensors, smart mobile phones, the Internet, and its World Wide Web services.
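The structured vs. unstructured distinction can be shown in a few lines. In this invented example, the same fact (42 people bought the widget) lives in a structured table, where you can look it up directly, and in free text, which an application must first parse before the number is usable.

```python
# Structured vs. unstructured data, with invented example data.
import re

# Structured: each field has a known meaning, like a database table,
# so the fact can be looked up directly.
sales = [
    {"product": "widget", "buyers": 42},
    {"product": "gadget", "buyers": 17},
]
widget_buyers = next(r["buyers"] for r in sales if r["product"] == "widget")

# Unstructured: the same fact buried in free text; the application
# must interpret the text before the number is usable.
note = "Last month 42 people bought the widget, far more than expected."
match = re.search(r"(\d+) people bought the (\w+)", note)
parsed_buyers = int(match.group(1))
```

Both paths recover the same number, but the unstructured path needed interpretation first, which is exactly the extra cost the text describes.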



Lattice Nepal is an online magazine for global readers. Get timely updates about world affairs, wonderful facts, and trustworthy news.
