Wednesday, October 23, 2024

My 2024 Artificial Intelligence (AI) Quest


With forty years in computer technology, I’ve witnessed a series of paradigm shifts: fundamental changes in the basic concepts and practices of data processing and information management. Some were more dramatic than others.

Punched-card data processing, and the migration from unit-record equipment to programmable computers, was the predominant computer technology of the sixties.

Soon came magnetic storage media on computers, like tape and magnetic and optical disks.

Then came online data communication over wired and wireless networks.

I never thought Personal Computers would usher in the huge PC-network information-management paradigm shift that followed.

Broadband Internet, cloud computing, and cloud storage exploded the market.

Next, the Mobile shift put it all in our pocket phones.

Jump to now, and Artificial Intelligence has EXPLODED on the scene. It started in academic and corporate research labs and blasted into the commercial market.

So, over forty years, I’ve learned that the publicity outpaces the reality of a technology, because adoption takes time, BUT AI is moving much faster than expected. Computer professionals would sometimes delay implementing new technologies, letting others suffer the bugs, but with AI, that is not an option for users. This paradigm shift in AI computer technology is the most dramatic in my experience. Every aspect of AI is new and very complex: concepts, functions, features, terminology, hardware, software, and firmware are all a newfangled environment. Initially, my AI research was very frustrating; the more I studied, the more confusing it became. Over time, I began to make progress, and now it’s time for some formal education.

The seven stages of AI (examples adapted from LinkedIn)

1. Rule-Based AI Systems

Description: In this stage, AI systems follow pre-set rules defined by human programmers. These systems are limited to specific tasks without the ability to adapt or learn.

Example: Early expert systems like MYCIN, developed in the 1970s to diagnose bacterial infections, followed a strict set of medical rules.

Another Example: TurboTax is a modern example, helping users file taxes based on rule-based logic but lacking adaptability or learning capabilities.

2. Context Awareness and Retention Systems

Description: AI at this stage can understand and retain context from past interactions to influence future decisions, offering a more personalized user experience.

Example: Amazon Alexa or Google Assistant can remember user preferences, adjusting responses based on context. For instance, Alexa might remind you of a shopping list item mentioned earlier or remember your home location for better suggestions.

Another Example: Tesla Autopilot uses memory from past driving experiences to improve performance over time, like identifying frequent routes.

3. Domain-Specific Mastery Systems

Description: These AI systems specialize in a single domain and perform at expert levels, far surpassing human capabilities in that specific field.

Example: IBM’s Watson mastered trivia to win Jeopardy!, using natural language processing to understand and answer questions.

Another Example: DeepMind’s AlphaGo became a domain-specific master by defeating world champion Go players, learning complex strategies via deep learning.

4. Thinking and Reasoning AI Systems

Description: AI systems here can simulate human thought processes, reasoning through complex tasks, solving novel problems, and making decisions without direct programming.

5. Artificial General Intelligence (AGI)

Description: AGI would be capable of performing any intellectual task that a human can. It would understand, learn, and apply knowledge across a wide array of tasks.

Example: Although AGI is still theoretical, platforms like DeepMind’s Gato aim to move closer to this stage, as it can perform over 600 different tasks, ranging from robotics to image recognition and text generation.

6. Artificial Superintelligence (ASI)

Description: ASI represents a system with cognitive abilities far beyond human intelligence. It could solve global-scale problems, from climate change to disease, with unimaginable speed and precision.

Example: This stage is still speculative, but imagine AI models like DeepMind’s AI for scientific discovery, potentially solving complex problems in physics or medicine.

Another Example: The concept of ASI often relates to future AI like GPT-5 or beyond, where machines could autonomously innovate in ways humans may not be able to foresee.

7. AI Singularity

Description: The AI Singularity refers to a hypothetical future point where AI growth becomes uncontrollable and irreversible, fundamentally transforming civilization. This point would likely be driven by an ASI that could improve its own intelligence beyond human comprehension.

Example: This stage is highly speculative, but Ray Kurzweil’s predictions for the 2040s involve AI surpassing human intelligence, leading to profound changes in society, economics, and even human biology. This sounds like “why would you do this?”, but basic economics says humans always prefer more of a good to less of it, so the hope is that the good will outweigh the bad!

The path of AI from rule-based systems to superintelligence showcases the field’s dynamic nature, full of both immense promise and complex ethical considerations.

The seven stages of AI are all experiencing tremendous research, development, and advancement. For sure, the first three stages are in extensive use today. Generative AI and ChatGPT are producing text, imagery, audio, video, and data for us.

Large Language Models are trained on massive datasets to generate fluent, human-sounding responses to our prompts, though their answers are not always accurate.

The BIG stage 7 suggests a method by which we humans could connect our minds, sort of like computers on the Internet, and instantly, selectively share our thinking, without our five senses, on a People-Network. Neuralink, the brain-computer interface and neuroprosthetics company started by Elon Musk and associates, is developing ultra-high-bandwidth brain-machine interfaces to connect humans and computers.

My main interest, at this time, is to understand and master the AI benefits and risks for senior citizens. Currently, we seniors are affected by the technology but really have no voice in it. No doubt, AI is the biggest computer technology paradigm shift ever, so far. Experts say:

“The development of full artificial intelligence could spell the end of the human race.” – Stephen Hawking

“AI is likely to either be the best or worst thing to happen to humanity.” – Elon Musk

“I am in the camp that is concerned about superintelligence. But I don’t think we need to be fatalistic about it.” – Bill Gates

“AI will be part of every industry, enhancing our abilities in ways we can’t even imagine yet.” – Jeff Bezos

“All AI things considered, Critical Thinking is now a prerequisite for EVERYONE, because of TRUE/FALSE STUFF in Social Networks and Artificial Intelligence.” -- ME SAY

Critical thinking is the analysis of available facts, evidence, observations, and arguments in order to form a judgment, through rational, skeptical, and unbiased analysis and evaluation. Nothing new, but not widely practiced!

So, AI is now business-driven by the profit motive; we didn’t vote for it, and we have no control over it. At the least, we should seek out the best information, using our streaming services, documentaries, publications, and any reliable sources we can find, so that we know what the experts are arguing about. We need AI that augments human intelligence, NOT replaces it!

The majority of the current AI users I talk to like it a lot. My personal experience is also very favorable, and my main concentration is representing senior citizens in how AI will affect us.

After a summer-long, worldwide search/research quest, the best introductory course I found is at Oxford University, UK. I’m taking this course online this winter. The Oxford Department for Continuing Education has provided the syllabus, or outline, for the course. In advance preparation, I’ve expanded the bulleted topics with my personal research, in an attempt to get a leg up on what the course will teach. My research draws on the most reliable, expert-based information I could find, and my notes are extracted from it. Sharing this expanded outline is my attempt to share what, fundamentally, we all need to know and consider, in simple, unassuming language, as we enter the AI world. Yes, even seniors need to at least read the overview, because you’re going to be a USER!

artificial intelligence, n.

The capacity of computers or other machines to exhibit or simulate intelligent behavior; the field of study concerned with this. Source: Oxford English Dictionary

Artificial Intelligence (AI) has become ingrained in the fabric of our society, often in seamless and pervasive ways that may escape our attention day-to-day. The ability of machines to sense, process information, make decisions and learn from experience is a transformative tool for organizations, from governments to big business. However, these technologies pose challenges including social and ethical dilemmas.

This course provides an essential introduction to the key topics underpinning AI, including its historical development, theoretical foundations, basic architecture, modern applications, and ethical implications. The course investigates the future trajectory of AI and considers its potential for improving the world while highlighting pitfalls and limitations. It is aimed at a general audience, including professionals whose work brings them into contact with AI, and those with no prior knowledge of AI. The course aims to confer an appreciation of the ways in which our world has already been transformed by AI, to explain the fundamental concepts and workings of AI, and to equip us with a better understanding of how AI will shape our society, so we can converse fluently in the language of the future. In preparation for the course, and guided by the syllabus/outline, I’ve added my research findings (+) under the bullets (•) of the outline below.

University of Oxford

Department for Continuing Education

Introduction to Artificial Intelligence (Syllabus)

https://conted.ox.ac.uk/courses/introduction-to-artificial-intelligence-online

onlinecourses@conted.ox.ac.uk

Unit 1: What is intelligence?

  • The concept of intelligence

+The ability to acquire and apply knowledge and skills. Our family units have always taught and guided our offspring to learn and improve on intelligence, making our world better.

  • What is artificial intelligence?

+   Artificial intelligence (AI) is the ability of a computer or robot to perform tasks commonly associated with intelligent beings. Most of us don’t realize how much we have already accepted, adopted and use these tools. We need to realize that the rapid advancements in artificial intelligence (AI) and machine learning (ML) technologies have led to significant societal implications. These technological innovations (both promising and uncertain) have the potential (controlling and dangerous) to revolutionize various aspects of society, including the future of learning, social impact, and the nature of work.

  • Weak vs Strong AI

+ Strong AI, or Artificial General Intelligence (AGI), refers to a hypothetical machine that exhibits human cognitive abilities. It can tackle diverse problems and develop new approaches to solve them. Strong AI aims to create machines with human-like cognitive abilities, self-awareness, and adaptability.

+ Weak or Narrow AI (often rule-based) refers to the use of advanced algorithms to accomplish specific problem-solving or reasoning tasks that do not encompass the full range of human cognitive abilities. Rule-based AI systems operate on a set of predefined rules created by human experts. These rules dictate the system's behavior in response to specific inputs (Alexa, chatbots, Amazon, Spotify, your self-driving car). Weak AI (our current model) can outperform humans on the specific tasks it is designed for, but it operates under far more constraints than even the most basic human intelligence.
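
To make the rule-based idea concrete, here is a minimal sketch in Python of how such a system works. The symptoms and conclusions are invented purely for illustration, in the spirit of MYCIN-style expert systems; they are not real medical rules.

```python
# A toy rule-based (narrow AI) system: fixed if-then rules written by
# humans map inputs to conclusions. Nothing here adapts or learns.
# The rules below are invented for illustration only.

RULES = [
    ({"fever", "cough"}, "possible flu"),
    ({"sneezing", "runny nose"}, "possible cold"),
]

def diagnose(symptoms):
    """Return the first conclusion whose required symptoms all match."""
    observed = set(symptoms)
    for required, conclusion in RULES:
        if required <= observed:  # are all required symptoms present?
            return conclusion
    return "no rule matched"
```

Give it symptoms its rules cover and it answers; give it anything outside its rules and it simply fails — the hallmark of Weak AI.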

  •  A Brief History of AI

+The field of AI wasn't formally founded until 1956, at a conference at Dartmouth College in Hanover, New Hampshire, where the term "artificial intelligence" was coined. As of 2024, AI is surpassing human performance on numerous benchmarks, including simulated conversation with human users, image classification, visual reasoning, and English understanding through text or voice. My personal experience with the Genie chatbot polishes my writing and vastly improves my communication and productivity. We all use other chatbots for searches, translation, shopping, and a wide variety of helpful tasks.

  • The Golden Age of AI

+The term “golden age of AI” is often used to describe the current era of rapid advancements and widespread adoption of artificial intelligence technologies (Major computer technology paradigm shift). This period is marked by significant breakthroughs in areas like generative AI, machine learning, and natural language processing. Tools like ChatGPT have captured the public’s imagination and are transforming personal and corporate computing. One of the most compelling reasons to study AI is to learn the ethical implications that come with advancing technologies.

  • Applications of AI

+Artificial Intelligence (AI) has a wide range of applications across various business sectors. (Medical, Education, Government, Commercial, Manufacturing, Industrial, Military, Aerospace, and …..)

Unit 2: Artificial intelligence and society

  • Data governance

+Data governance is critical in quality assurance for ethical AI by establishing frameworks and guidelines that govern the collection, storage, usage, and sharing of data. It ensures that the data used to train AI models is accurate and reliable, and it helps mitigate bias. (As with other new computer technologies, self-governance will be an issue.)

  • AI and Equality

+AI has the potential to increase inequality. As AI is increasingly applied to make consequential decisions that affect social, political, and economic rights, it is imperative that we ensure these systems are built and applied in ways that uphold principles of fairness, accountability, and transparency. (This is why seniors need a voice, like other business sectors.) Also, young users can be led like sheep, because commonly their minds are not developed enough to discriminate among information sources. Plus, many adult users do not possess critical thinking skills and are led by deceptive media.

  • AI and employment

+Overall, while AI poses challenges, it also offers opportunities for innovation and growth in the job market. For example: a manufacturing plant of 1,600 employees that AI reduced to 400; now the staff is increasing again with new tech jobs. AI will create more jobs, so candidates must (as usual) qualify themselves with the necessary skills.

  • Economic opportunities of AI

+AI is already affecting how economies grow, produce jobs, and trade internationally. The myriad ways globalization impacts our lives are connected to AI, and we tend to blame our President and government for economic pain, when in fact the answer is very complex. Critical thinking is a new requirement for us all, to learn, UNDERSTAND, and adopt the inevitable changes and new technologies.

  • Risks of AI

+AI has the potential to revolutionize various fields, but it also poses serious threats to society and humanity (lots known and unknown here). Some people feel they will fall behind as AI becomes more prevalent. So, is AI something we should be scared of? The fears of AI seem to stem from a few common causes: general anxiety about machine intelligence, the fear of mass unemployment, concerns about superintelligence, the power of AI falling into the wrong hands, and general caution about new technology.

Artificial intelligence algorithms may soon reach a point of rapid self-improvement that threatens our ability to control them and poses great potential risk to humanity, and many AI experts are very concerned that the result could be catastrophic. We’re using it and liking it, but it will usher in dramatic change. Is it progressing too fast? Do you think AI could control us, as if we’re just robots? Certainly, our corporate oligopolies are going to use AI to increase profit and simultaneously raise prices even more.

Then there’s the China vs. USA situation, where China is winning the computer technology race. China is intent on dominating and monopolizing quantum computing and artificial intelligence. America has created the Special Competitive Studies Project to address and strengthen America’s long-term competitiveness. I can’t even imagine how all this will play out. Will we be able to validate and check the personal data that AI will capture and generate? In a recent meeting with young computer technicians, I discovered that they’re concerned about the computer tech jobs that AI is displacing. This whole technology thing is huge and very complex, growing and changing every day, so we must be aware of how it will affect (psychologically, economically, politically) us senior citizens.

+ Automation-spurred job loss (a fact, and coming at the worst time in a divided nation). Numerous observers believe recent developments in robotics and AI may cause an unprecedented wave of automation-related job losses. The answer is to get prepared for AI and develop those AI skills, because it’s a new, unavoidable field.

+ Deepfakes

Deepfakes use AI to replace the likeness of one person with another in image, video, or audio. Now AI can turn anyone into any image, and vice versa, and celebrities (or anyone) have lost control of their likeness on social media. Don’t take anything at “FACE VALUE”: you MUST learn critical-thinking skills, learn how to spot deepfakes, and not believe anything you see at face value!

+ Privacy violations

Artificial intelligence has been no different when seen through a privacy-by-design lens: privacy has not been top-of-mind in the development of AI technologies. There is a high risk to individuals’ rights and freedoms in the AI processing of personal data, something quite different from the risk posed by data breaches, but also with very little fallout for the companies responsible. Some privacy challenges of AI include:

Data persistence – data existing longer than the human subjects that created it, driven by low data storage costs

Data repurposing – data being used beyond their originally imagined purpose

Data spillovers – data collected on people who are not the target of data collection

Just recently, the largest data spillover in history occurred, in which billions of people (targets or non-targets of data collection) were data-scraped by National Public Data, a company most have never heard of and have done no business with. This background-check company grabs data from every source it can and sells it, without anyone’s permission or authorization, and claims no responsibility for the security of the scraped data (to me, it’s stolen data). It’s all about money; the same is true for networks, no guarantees!

+ Algorithmic bias caused by bad data

Algorithmic bias is when processes commit systematic errors that unfairly favor or discriminate against certain groups of people. These AI biases are the result of poor training data and the biases of the humans who compiled the data and trained the algorithms. We’ll be trusting humans, with human flaws, to do this, so as users we had better be able to think critically.

+ Socioeconomic inequality

Most empirical studies find that AI technology will not reduce overall employment. However, it is likely to reduce the relative amount of income going to low-skilled labor, which will increase inequality across society. Nothing new for technology, because change is inevitable and our skills must change. Personally, I think AI is progressing at the worst time ever, because social media has given us all a voice, and the social divide will intensify. Ideally, we need good employment opportunities for everyone, matched to their education and skills. That is a tall order!

+Market volatility

AI is being increasingly used to analyze and predict market volatility. Techniques such as artificial neural networks and machine learning algorithms have shown promise in accurately forecasting stock prices and identifying changes in market trends. Yes, it’s loaded with good and bad! The good is how AI shows promise in taming market volatility. The bad is the regular bad, plus unknown future developments.

+ Weapons automatization

AI weapons automation refers to the use of artificial intelligence in military applications. Here are some key points about this topic:

Current AI-enabled weaponry is not yet fully autonomous, but the technology exists.

Advances in AI empower autonomous weapons and platforms to carry out more sophisticated behaviors and activities. AI can be used for analyzing the battlefield, providing augmented reality information to soldiers, and identifying threats.

Deployment of AI-controlled drones that can make autonomous decisions about killing human targets is being developed by countries, including the US, China, and Israel.

With Russia’s invasion of Ukraine as the backdrop, the United Nations recently held a meeting to discuss the use of autonomous weapons systems, commonly referred to as killer robots.

  • Uncontrollable self-aware AI

+Uncontrollable self-aware AI is a topic of concern. While it may sound like science fiction, there are already machines that perform tasks independently without programmers fully understanding how they learned them. A recent study suggests that it would be virtually impossible to keep an artificial superintelligence (ASI) under control. Although there is no theoretical barrier to AI reaching self-awareness, the conclusion is that we currently could not control it. (Totally baffled by this.)

  • AI and accountability

+Accountability in AI refers to the expectation that organizations or individuals will ensure the proper functioning of the AI systems they design, develop, operate, or deploy, in accordance with their roles and applicable regulatory frameworks. Providing accountability for trustworthy AI requires that actors leverage processes, indicators, standards, certification schemes, auditing, and other mechanisms at each phase of the AI system lifecycle. There is a growing concern about an "accountability gap" in AI, and this gap has prevailed through the entire history of computer technology. Think about your company: security and accountability cost money, and it’s probably all about the money, so nah, we don’t need that!

Unit 3: Systems and agents

  • Concept of an agent

+An AI agent is a software entity that uses artificial intelligence techniques to perceive its environment, make decisions, and take actions to achieve specific goals. AI agents can operate autonomously or semi-autonomously and are designed to solve problems, automate tasks, or provide services in various domains.

 

+ The Key Characteristics of AI Agents:

+1. **Perception**: AI agents can gather information from their environment through sensors or data inputs. This could include anything from visual data from cameras to numerical data from sensors.

+2. **Decision Making**: Based on the information they perceive, AI agents use algorithms and models to analyze data, evaluate options, and make decisions. This may involve machine learning, rule-based systems, or optimization techniques.

+3. **Action**: Once a decision is made, the AI agent takes action to affect its environment or achieve its objectives. This could involve sending commands to other systems, generating responses, or interacting with users.

+4. **Autonomy**: Many AI agents are designed to operate with a degree of independence, meaning they can perform tasks and make decisions without human intervention.

+5. **Learning**: Some AI agents are capable of learning from their experiences, allowing them to improve their performance over time. This can involve adjusting their strategies based on feedback or new data.
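
The five characteristics above can be sketched as a single loop. The toy agent below is hypothetical, invented just for this post: it perceives a sensor reading, decides, acts, and crudely "learns" by accumulating experience.

```python
class ToyAgent:
    """Toy perceive-decide-act loop with a crude form of learning."""

    def __init__(self):
        self.total = 0  # remembered experience (learning, very crudely)

    def perceive(self, reading):
        return reading  # in a real agent: sensors, cameras, data feeds

    def decide(self, percept):
        # Decision making: alert once accumulated readings exceed a limit.
        return "alert" if self.total + percept > 10 else "ok"

    def act(self, decision):
        return decision  # in a real agent: commands, responses, movement

    def step(self, reading):
        percept = self.perceive(reading)
        decision = self.decide(percept)
        self.total += percept  # autonomy: no human in this loop
        return self.act(decision)
```

Each `step` call runs the whole cycle, and the agent’s behavior changes over time as its remembered total grows — the simplest possible stand-in for learning.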

+The Types of AI Agents:

+- **Reactive Agents**: These agents respond to specific stimuli from the environment but do not have memory or learning capabilities.

+- **Deliberative Agents**: These agents maintain an internal model of the world and can plan actions based on that model.

+- **Learning Agents**: These agents can learn from their experiences and improve their performance over time, often using machine learning techniques.

+- **Multi-Agent Systems**: In some applications, multiple AI agents can work together or compete with each other to achieve individual or collective goals.

  • Applications:

+AI agents are used in various fields, including:

+- **Customer Support**: Chatbots and virtual assistants that handle customer inquiries.

+- **Robotics**: Autonomous robots that can navigate and perform tasks in real-world environments.

+- **Gaming**: Non-player characters (NPCs) that interact with players and adapt to their strategies.

+- **Recommendation Systems**: Agents that suggest products or content based on user preferences and behaviors.

+Overall, AI agents are a fundamental part of the broader landscape of artificial intelligence, enabling more intelligent, responsive, and autonomous systems across various applications.

+Structure of an agent

To understand the structure of intelligent agents, we should be familiar with architecture and agent programs. Architecture is the machinery the agent executes on: a device with sensors and actuators, for example a robotic car, a camera, or a PC. An agent program is an implementation of an agent function. An agent function is a map from the percept sequence (the history of everything the agent has perceived to date) to an action.

+Rationality of an agent

To be considered a rational agent, an AI agent must select actions that maximize its performance measure for all possible percept sequences. Rationality goes beyond simply being an agent; it focuses on achieving the desired outcomes given the available information and prior knowledge.

+Perfect agents

A perfect AI agent possesses the following characteristics:

Autonomy: It can act independently without continual human intervention.

Learning and Adaptation: It can learn from data and adapt its behavior over time.

Interaction: It can meaningfully interact with other agents or systems.

Perception: It has sensors or mechanisms to observe and perceive its environment.

Reasoning and Decision-Making: It can process information, reason about goals, and make decisions to achieve those goals.

+Task environments

In the context of AI, the task environment refers to the surroundings or circumstances in which an AI system functions. It includes the physical environment, digital platforms, and virtualized worlds where AI models and algorithms are used. The task environment can be classified based on various factors such as observability, agents, determinism, episodic nature, and continuity. Back to the computer part: it’s all about the hardware, firmware, software, and data.

+Designing agents

Designing an AI agent means specifying a system or program capable of autonomously performing tasks on behalf of a user or another system, by designing its workflow and utilizing the available tools.

+Simple reflex agent

Simple reflex agents are fundamental constructs in artificial intelligence (AI) that operate on a simple principle: they make choices based entirely on immediate environmental stimuli and then take the appropriate action. This is like our home thermostats, health-monitoring devices, automobile sensors, and a constantly growing list of other AI stuff. This is the simple reflex agent we’re all readily adopting, but wait, these agents could do a lot more! Good or bad?
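
A thermostat-style reflex agent fits in a few lines of Python; the temperature thresholds here are arbitrary, chosen just to show the condition-action idea.

```python
def reflex_thermostat(temp_f):
    """Simple reflex agent: condition-action rules, no memory, no model."""
    if temp_f < 68:   # too cold -> heat
        return "heat"
    if temp_f > 74:   # too warm -> cool
        return "cool"
    return "off"      # comfortable -> do nothing
```

Every decision depends only on the current reading; the agent remembers nothing about past temperatures.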

+Model-based reflex agents

A model-based reflex agent uses a remembered history of perceptions to choose actions and form a more comprehensive view of its environment. Unlike simple reflex agents, model-based reflex agents have knowledge of how things happen in the world. They take into account both the current percept and an internal state representing the unobservable aspects of the environment. They update their internal state based on how the world evolves independently and how their actions affect the world.
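
Contrast that with a model-based sketch. This hypothetical vacuum agent, invented for illustration, keeps an internal record of which squares it believes are clean — information the current percept alone cannot provide.

```python
class ModelBasedVacuum:
    """Model-based reflex agent: current percept plus remembered state."""

    def __init__(self):
        self.believed_clean = set()  # internal model of the world

    def step(self, location, dirty):
        if dirty:
            self.believed_clean.discard(location)  # model was wrong; update it
            return "suck"
        self.believed_clean.add(location)  # remember this square is clean
        return "move"
```

The `believed_clean` set is the agent’s model: it persists between steps and gets corrected whenever the world turns out to differ from it.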

+Goal-based agents

Goal-based agents are AI systems designed to achieve specific objectives or goals. Unlike simple reflex agents that act solely based on current perceptions, goal-based agents consider future consequences of their actions, ensuring that they align with the set objectives. Given a plan, a goal-based agent attempts to choose the best strategy to achieve it based on the environment. We also have readily adopted this one, in natural language processing. Yes, it’s where you talk to Alexa, Google, Genie and you-name-it, they’re everywhere.
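
A goal-based agent can be sketched as a search over possible actions until the goal state is reached. The little map below is invented, and breadth-first search stands in for whatever planner a real agent would use.

```python
from collections import deque

def plan_route(graph, start, goal):
    """Search for a sequence of moves (a plan) that reaches the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:  # goal test: does this plan achieve it?
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # no plan achieves the goal

# A made-up one-way street map for the example.
CITY = {"home": ["store", "park"], "store": ["office"], "park": ["office"]}
```

Asked to get from "home" to "office", the agent returns a whole plan of future actions, not just a reaction to the current spot — that is what separates it from a reflex agent.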

+Utility-based agents

Utility-based agents choose among multiple possible alternatives by assigning a preference (utility) to each state. Sometimes achieving the desired goal is not enough: we may look for a quicker, safer, cheaper trip to reach a destination. Agent "happiness" should be taken into consideration; utility describes how happy the agent is. Because of the uncertainty in the world, a utility agent chooses the action that maximizes the expected utility. A utility function maps a state onto a real number which describes the associated degree of happiness.
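
The "expected utility" idea reduces to a few lines of arithmetic. The probabilities and utility scores below are invented to illustrate the trip example.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Pick the action whose outcomes have the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Made-up numbers: the highway is usually fast (utility 8) but has a
# 10% chance of a miserable jam (utility -5); the backroad is certain.
TRIPS = {
    "highway": [(0.9, 8), (0.1, -5)],
    "backroad": [(1.0, 6)],
}
```

With these made-up numbers, the highway scores 0.9×8 + 0.1×(−5) = 6.7 against the backroad’s 6.0, so a utility-based agent takes the highway even though the backroad is "safe."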

Now folks, by now you begin to see that AI, like our previous computer technology paradigm shifts, is way more than anything we’ve ever experienced, and it is incumbent on us to be informed about what’s behind the tools we choose and use. Critical thinking: it’s REQUIRED everywhere!!!!

Unit 4: Logic and language

  • Early ideas: Logic and language

+At a time when computing power was still largely reliant on human brains, the British mathematician Alan Turing imagined a machine capable of advancing far past its original programming: a computing machine initially coded to work according to its program that could then surpass human capability.

The “Evolution of Artificial Intelligence” by Joshua Cena, of the University of Manchester, delves into the rich history and evolutionary journey of artificial intelligence (AI), tracing its origins from early conceptualizations to its current applications in various domains. Through a comprehensive review of key milestones, breakthroughs, and influential figures, the document highlights the pivotal moments that have shaped the development of AI over time. It explores how AI has transitioned from theoretical frameworks and symbolic reasoning approaches to the era of machine learning, deep learning, and neural networks, leading to transformative advancements in areas such as robotics, natural language processing, computer vision, healthcare, and other autonomous systems.

  • Imitating mathematical intelligence

+AI has made significant strides in imitating mathematical intelligence, particularly in solving complex problems that require advanced reasoning skills and computational power. These advancements are not just about solving math problems for competitions; this progress indicates that AI is getting closer to human-like reasoning abilities, which could lead to more powerful AI tools for scientific research and education.

  • Propositional logic

+Propositional logic is a fundamental building block in AI, serving as the language in which we express knowledge and information in a structured manner. This system allows us to represent the world's knowledge, facts, and relationships using simple, atomic propositions, and logical operators like "AND," "OR," and "NOT." Propositional logic is based on propositions, binary statements about the world, that can be either true or false.
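Here is a tiny illustration of my own (not from the course): propositions are just true/false values, and AND, OR, and NOT combine them into compound statements.

```python
# Propositions are binary statements: each is either True or False.
raining = True
have_umbrella = False

both = raining and have_umbrella       # "raining AND have umbrella" -> False
either = raining or have_umbrella      # "raining OR have umbrella"  -> True
not_raining = not raining              # "NOT raining"               -> False

# A compound rule: "I get wet if it is raining AND I do NOT have an umbrella."
get_wet = raining and not have_umbrella
print(get_wet)  # True
```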

  • Designing mathematical languages

+Three types of language are used in designing AI products: formal, computational, and natural.

1.Formal languages, such as mathematics, logic, and programming languages, have fixed meanings and no actual-world semantics.

2.Computational Languages: These languages refer to real-world entities, events, and thoughts. They have actual-world references and semantics, making them context-sensitive. Computational languages are used to model and simulate real-world phenomena within AI systems.

3. Unlike formal and computational languages, natural languages (like spoken English, French, Spanish, Japanese, etc.) are dynamic, creative, and productive. They can refer to an unlimited number of objects and their attributes across various domains. Natural languages are often used in AI for tasks involving human-computer interaction, such as natural language processing (NLP). We’re all doing this now, much more than we realize.

  • Gödel's incompleteness

+Gödel's incompleteness theorems are two theorems of mathematical logic concerned with the limits of provability in formal axiomatic theories. Not sure, but for a given statement the answer might be yes, no, or maybe (undecidable). This is currently over my head!

  • Solving mathematical problems with AI

+ Need more info. This may be about how AI agents can work cooperatively, passing a problem along to another agent whose environment can best meet the requirements of the query.

  • Halting problem

+The Halting Problem, a fundamental concept in computer science and artificial intelligence, poses intriguing questions about the limits of computation. It asks whether it is feasible to determine, in general, whether a program will eventually halt or continue to run indefinitely. (Programmers call this being hung in a loop.) Logic is limited by the binary bit patterns of the program. Example: an eight-bit byte has 2^8 = 256 possible values. A word is a group of bytes, so a word can hold powers of 2 for 16 bits, 24 bits, 32 bits, and so on.


Unit 5: Expert systems

  • What are expert systems?

+An expert system in artificial intelligence (AI) is a computer system that emulates the decision-making ability of a human expert. It is designed to solve complex problems by reasoning through a body of knowledge (a knowledge base), represented mainly as if-then RULES rather than through conventional procedural code.
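A minimal, hypothetical rule engine in the spirit of an expert system (the symptoms, rules, and diagnoses here are my own toy examples, not real medicine): knowledge lives in if-then rules, not procedural code.

```python
# Each rule: if all the conditions are among the known facts, fire the conclusion.
rules = [
    ({"fever", "cough"}, "possible flu"),
    ({"fever", "rash"}, "possible measles"),
    ({"sneezing"}, "possible cold"),
]

def diagnose(symptoms, rules):
    """Fire every rule whose conditions are all present in the facts."""
    return [conclusion for conditions, conclusion in rules
            if conditions <= set(symptoms)]

print(diagnose(["fever", "cough", "headache"], rules))  # ['possible flu']
```

Notice that adding new expertise means adding rules to the list; the reasoning code itself never changes, which is the whole point of separating the knowledge base from the inference engine.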

  • Representing knowledge

+Knowledge representation is the method by which information is formalized for AI systems to use. It encompasses a variety of techniques designed to represent facts, concepts, and relationships within a domain, allowing machines to process and utilize this information effectively.

Primary goals of knowledge representation:

+Expressiveness: The ability to represent a wide variety of knowledge.

Efficiency: The capability to manipulate and reason with knowledge quickly.

Understandability: The ease with which humans can comprehend the represented knowledge.

Scalability: The ability to handle increasing amounts of knowledge without significant performance degradation.

Types of AI knowledge:

Declarative Knowledge: Facts and information about objects, events, and their relationships. For example, “Paris is the capital of France.”

Procedural Knowledge: Knowledge of how to perform tasks. For example, “How to ride a bicycle.”

Meta-Knowledge: Knowledge about other knowledge. For example, “The reliability of a source.”

Heuristic Knowledge: Rules of thumb or best practices. For example, “If the weather is cloudy, it might rain.”

This gets way deep, so I’m stopping here, for now!

  • Reasoning with logic

+Reasoning in AI refers to deriving new information from existing information using logical rules and principles. AI systems use reasoning to make inferences, draw conclusions, and solve problems. Automated reasoning lies at the core of artificial intelligence, where the focus is on crafting systems that can independently navigate the realm of logical deductions and inferences. It can be thought of as giving machines the ability to think logically. Much of this is now implemented with artificial neural networks, which are composed of layers of nodes.

Each node is designed to behave similarly to a neuron in the brain.

The first layer of a neural net is called the input layer, followed by hidden layers, then finally the output layer.

Each node in the neural net performs some sort of calculation, which is passed on to other nodes deeper in the neural net.

Going deeper, we find the Artificial Neural Network (ANN) and the Biological Neural Network (BNN), which is a whole other data science course!

  • Backward chaining

+Backward chaining is an inference method where an AI system starts with a goal or desired outcome and works backward through a series of rules and conditions to find the necessary steps or conditions to achieve that goal. It’s like solving a puzzle in reverse, beginning with the solution and tracing back to the initial conditions. This reminds me of a GIS expert I worked with. He said to “play the movie backwards” when planning, organizing, managing, and developing a data system. Being IBM-trained, I learned top-down hierarchy in business systems, so backward chaining is a very difficult concept for me!
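Here is backward chaining in miniature (a toy example of my own, with made-up rule names): start from the goal and recursively ask which rules could establish it, checking their preconditions against the known facts.

```python
# rules: goal -> list of preconditions that must all be achievable.
rules = {
    "trip_planned": ["route_chosen", "budget_set"],
    "route_chosen": ["map_available"],
    "budget_set": [],            # no preconditions: trivially achievable
}
facts = {"map_available"}        # what we already know to be true

def can_achieve(goal, rules, facts):
    if goal in facts:
        return True
    if goal not in rules:
        return False
    # Work backward: the goal holds if every precondition can be achieved.
    return all(can_achieve(pre, rules, facts) for pre in rules[goal])

print(can_achieve("trip_planned", rules, facts))  # True
```

This really is the movie played backwards: instead of deriving everything forward from the facts, we only explore the chain of rules that could possibly lead to the one goal we care about.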

  • Advantages and disadvantages of expert systems

+ This approach is heavily dependent on a good knowledge base. Experts keep updating the information in the knowledge base, and non-experts make use of this information for complex problem-solving. OK, think about your work experience: some folks are just not very dependable about keeping a good knowledge base. It really boils down to “can you trust this information?” So, just like Internet users, people must have (or learn) good critical-thinking skills.

Unit 6: Connectionist models

  • Biological neural network

+A biological neural network (BNN) is a physical structure found in brains and complex nervous systems, consisting of interconnected neurons linked by synapses. These networks allow for communication and information processing within the nervous system. Neurons are connected by axons and dendrites, and neurotransmitters are released at synapses to excite or inhibit adjacent neurons. Scientists have even fused brain-like tissue with electronics to make an ‘organoid neural network’ that can recognize voices and solve a complex mathematical problem. Their invention extends neuromorphic computing, the practice of modelling computers after the human brain, to a new level by directly including brain tissue in a computer. This seems way out-yonder, but it’s on the AI plan!

  • ANNs in action

+An artificial neural network (ANN) is a computing system patterned after the operation of neurons in the human brain. The layered ANN, inspired by the BNN, is the foundation of the AI system, with an input layer, hidden layers, and an output layer. A query enters at the input layer, is resolved (answered) along the way, and exits at the output layer. All the AI stuff WE do is facilitated by specialized algorithms (look up classes of algorithms) that pass data through the interconnected nodes of the neural network layers, from the input layer, through the hidden layers, and finally to the output layer, with our answer.
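To see what “passing through the layers” actually means, here is a minimal forward pass through a tiny network. All the weights and biases here are made-up numbers of my own, purely for illustration; a real network learns them from data.

```python
import math

def sigmoid(x):
    """Squashes any number into the range (0, 1)."""
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each node weighs its inputs, adds a bias, and applies an activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

inputs = [0.5, 0.8]                                   # the input layer
hidden = layer(inputs,                                # one hidden layer
               weights=[[0.4, -0.6], [0.7, 0.1]],
               biases=[0.0, -0.2])
output = layer(hidden,                                # the output layer
               weights=[[1.2, -0.9]],
               biases=[0.1])
print(output)  # a single value between 0 and 1
```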

  • Building blocks of a neural network

+ The building blocks of an ANN are its nodes (artificial neurons), the weighted connections between them, and the activation functions that decide each node’s output; underneath it all, the logic gates and electronic circuits of the hardware support the artificial intelligence process.

  • An example of a neural network (We’re using these without any real concern or understanding.)

+ Google’s search algorithm/Chatbot

Computer vision for image and video processing

Speech recognition for understanding natural language requests

Natural language processing (NLP) for language understanding

  • Backpropagation

+ Backpropagation (used for finding and correcting errors) is a widely used method for calculating derivatives inside deep feedforward (one-way) neural networks. Backpropagation forms an important part of a number of supervised learning algorithms for training feedforward neural networks, such as stochastic gradient descent. OK, this is deep stuff, but basically experts have to train the ANN to straighten out its mistakes, as in “that just didn’t come out right!” Yes, an expert has to tune the ANN and correct it. I just can’t imagine a job like this! Well, now we’re going to look at a massive workforce of humans whose jobs were created by and for AI. They are the foot soldiers training the algorithms. The job title is sometimes annotator or tasker. An AI annotator’s role is to systematically review and label different data types, translating human language and inputs into machine-understandable formats. A Human-AI Tasker typically refers to a system or framework where humans (using AI tasker tools) and artificial intelligence collaborate to complete tasks. This concept is often part of Human-Centered AI or Human-in-the-Loop systems, which emphasize the importance of human involvement in AI processes. I know of it, but don’t understand the mechanics of it! These are some of the new work-anywhere jobs of AI for qualified personnel. You have to work smart and meet standards, but you can be paid electronically, daily, $600 or more.
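Backpropagation in miniature, as a toy example of my own: a single “neuron” with one weight, a squared-error loss, and gradient descent. The gradient (the derivative of the error with respect to the weight) is the piece that backpropagation computes in real networks.

```python
# One neuron, one weight: we want w * x to equal the target.
w = 0.0                      # start with a bad guess
target, x = 2.0, 1.0
lr = 0.1                     # learning rate

for step in range(100):
    prediction = w * x
    error = prediction - target
    gradient = 2 * error * x     # d(error^2)/dw, "propagated back"
    w -= lr * gradient           # gradient descent update

print(round(w, 4))  # 2.0
```

Each update nudges the weight in the direction that shrinks the error; a deep network does exactly this, but for millions of weights at once, with the gradients flowing backward layer by layer.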

  • Architectures and training

+ An AI architect is a specialized professional responsible for designing and overseeing the implementation of AI solutions. Highly skilled, AI architects envision, build, deploy, and operationalize an end-to-end machine learning (ML) and AI pipeline. Now wait, envision who this might be in your company! Watch out! The reality is that most officers, executives, administrators, and managers want AI, but history says these folks throw it in the closet to the IT folks, and customer service, business risk, and security somehow fall through the cracks. Yes, the company’s employees and customers pay the price. Have you ever been the victim of a data breach? This one is way worse!

Unit 7: Artificial intelligence in the 21st century

  • Arriving at the current state of AI

+In the 24th year of the 21st century, we now have AI, a computer-technology paradigm shift that we must embrace while employing our critical-thinking SKILLS. The experts agree that AI development has flipped over the last decade from academia-led to industry-led, by a large margin, and this shows no sign of changing. Most companies can’t afford to develop it, so they buy it from consulting companies. Consider Voice over IP and the automated, digital corporate telephone answering system. It asks the customer to press a number or say “yes/no” to match the question with a VoIP answer or digital agent. To me this can be frustrating, and if you get ugly, it hangs up on you. We often wonder if company management ever actually reviews their answering system. My general impression is that this is NOT customer service, it’s customer aggravation, and you’ll be hard-pressed to find a human to complain to, because you’ll be holding for hours for a real person. What about monitoring and validating YOUR personal information that AI is using? AI is different, but will AI be better or more frustrating?

  • Big Data

+ Big data and AI are related but distinct concepts. Big data is a collection of unstructured information from all kinds of sources, while AI is a process of analyzing and learning from data. Big data is the fuel that powers AI, providing it with the information necessary for developing and improving features and pattern-recognition capabilities. (Data is information, and big information makes answers for you.) AI, in turn, delivers actionable insights (answers for you) from big data. Big data can come from the Internet, publicly available sources, or it can be proprietary. I’m totally not clear about how this is managed!

  • Big Data and the Internet

+ Big data is produced from multiple data sources like mobile apps, social media, emails, transactions, or Internet of Things (IoT) sensors, resulting in a continuous stream of varied digital material. I’m not sure about the personal risks to private accounts, subscriptions, or any personal stuff that we have. The Internet for sure is open season! For years, I told users, students, and management that the Internet is the Wild West: “connected,” you’re saying “here I am, try me,” and what you put out there is fair game for anyone, anywhere, for any reason, and it can stay there forever! Great security is a tool but not a deterrent. Companies are now finding that legislation and controls on Internet content are not working.

  • AI and healthcare

+ This is a big one for seniors. Artificial intelligence in healthcare refers to the use of machine-learning algorithms and software to mimic human cognition in the analysis, presentation, and comprehension of complex medical and healthcare data. It involves the use of machine learning, natural language processing, deep learning, and other AI-enabled tools to assist and improve the patient experience, including diagnosis, records, treatment, and outcomes. AI can help manage and analyze data, make decisions, and conduct conversations, so it is destined to drastically change clinicians’ roles and everyday practices. Many wearable devices and personal monitors are already being incorporated into this technology. This will be a big advantage to medical doctors and healthcare professionals, and it WILL be the patient’s responsibility to monitor their personal data.

  • AI and automobiles

+We all know about this one, and we accept it all, usually admitting that our cars are smarter than us drivers. AI in vehicles has pros and cons. We would all probably come up with the obvious ones, but more critical thinking is needed.

  • Cybersecurity

+ AI and cybersecurity:

AI is used to automate repetitive tasks for security analysts.

It helps identify shadow data and monitor data access.

AI can anticipate cyberattacks and enable faster response.

Cybersecurity is essential for the safety and reliability of AI systems.

How does this help against data breach? Security tools are fantastic but they must be judiciously managed, a real challenge for Information Management (Computer people). Most of the big installations have a qualified data security manager.

  • Machine translation

+ AI machine translation:

Uses AI to automatically translate text and speech from one language to another.

Relies on natural language processing and deep learning.

Aims to preserve the meaning, context, and tone of the original content.

I use Google Translate and Genie AI Chatbot, and look forward to more functions and features.

  • Ethics of AI

+ The ethics of artificial intelligence (AI) is a critical field that addresses the moral and responsible development and use of AI technology.

UNESCO’s Recommendation on AI Ethics: In 2021, UNESCO produced the first-ever global standard on AI ethics, known as the “Recommendation on the Ethics of Artificial Intelligence.” This framework emphasizes four core values: respect for human rights and dignity, promotion of diversity and inclusiveness, protection of the environment and ecosystems, and ensuring transparency and accountability.

We all want this, so given the history of computer technology and the Internet, this will be a “bear” to achieve, and a never-ending struggle to maintain. Just saying! My critical thinking says be really careful with your personal data AND who you trust. 

Unit 8: Data science and artificial intelligence

  • What is data science?

+Data science is the process of collecting and analyzing data. The data can come from anywhere and anything. Every company and every computer user has data files, and the owner is responsible for protecting their data. Artificial intelligence (AI) involves the process of learning, reasoning, and self-correction from the data. AI is limited to the implementation of machine learning algorithms, whereas data science includes a broad range of statistical methods.

  • Data science processes

+The data science process is a structured approach to solving data-related problems. Data science processes are a set of steps followed by data scientists as they collect, analyze, model, and visualize large volumes of data. The process covers everything from data collection to presenting visualized data and insights to the business stakeholders.

Here are the key steps involved:

1.Problem Definition: Clearly define the problem and identify the goal of the analysis.

2.Data Collection: Gather data from various sources. This can involve surveys, web scraping, or accessing databases. This one is vague and suspicious but I really don’t understand it all.

3.Data Cleaning: Clean the data to remove duplicates, handle missing values, and correct inconsistencies. This step ensures the data is ready for analysis.

4.Exploratory Data Analysis (EDA): Analyze the data to uncover patterns, relationships, and insights. This helps in understanding the data better and guides the modeling process.

5.Model Building: Use machine learning algorithms and statistical models to build predictive models. This step involves selecting the right model, training it, and evaluating its performance.

6.Model Deployment: Deploy the model in a real-world environment where it can be used to make predictions or provide insights. Monitoring the model’s performance is crucial to ensure it continues to work well.

7.Communication: Present the findings and insights to stakeholders in a clear and understandable manner. This often involves visualizations and reports.

Now, we can see that any data, anywhere, can become the subject or object of AI systems. Recently, TikTok, a popular video platform used primarily by young users, has become a topic of espionage risk. Kids can’t understand the risk of what they post on platforms, nor can they understand becoming victims of a platform’s sinister manipulation.

  • Data exploration: An example

Data exploration is the initial step in data analysis, where you dive into a dataset to get a feel for what it contains. It’s like detective work for your data, where you uncover its characteristics, patterns, and potential problems. Data exploration is an approach similar to initial data analysis, whereby a data analyst uses visual exploration to understand what is in a dataset and the characteristics of the data, rather than traditional data management systems. These characteristics can include the size or amount of data, the completeness and correctness of the data, and possible relationships among data elements or files/tables. My first experience with scraping, outside of private corporate data, was exploring external Geographic Information Systems (GIS) to find stuff we could use. At the time, most GIS systems used common software and data formats. It was kind of like, “if you could get to it, it was free for the taking.”

Data exploration is a hard-to-imagine AI job, but the tools are constantly improving.

  • AI methods in data science

+ AI methods are integral to data science, enhancing the ability to extract meaningful insights from large datasets. Here are some key AI methods used in data science:

1.Machine Learning: This involves algorithms that learn from data to make predictions or decisions without being explicitly programmed. Common techniques include regression, classification, clustering, and reinforcement learning.

2.Deep Learning: A subset of machine learning, deep learning uses neural networks with many layers (hence “deep”) to model complex patterns in data. It’s particularly effective in image and speech recognition. We’re seeing a lot of image stuff now!

3.Natural Language Processing (NLP): This method enables computers to understand, interpret, and generate human language. Applications include sentiment analysis, language translation, and chatbots.

4.Data Mining: This involves exploring large datasets to discover patterns and relationships. Techniques include association rule learning, anomaly detection, and sequence mining.

5.Predictive Analytics: Using historical data, predictive analytics employs statistical algorithms and machine learning techniques to forecast future outcomes. It’s widely used in finance, marketing, and healthcare.

6.Computer Vision: This field focuses on enabling machines to interpret and make decisions based on visual data from the world. Applications include facial recognition, object detection, and autonomous vehicles.

  • Autoencoders

+ An autoencoder is a type of neural network architecture designed to efficiently compress (encode) input data down to its essential features, then reconstruct (decode) the original input from this compressed representation. It is a very complex subject, but simply put, it takes raw data and converts it to fit your specific AI system.

  • Data imputation

+ Data imputation is the process of replacing missing or unavailable entries in a dataset with substituted values. This process is crucial for maintaining the integrity of data analysis, as incomplete data can lead to biased results and diminish the quality of the dataset. In early information management systems, we inserted default parameters to substitute for missing data, and we created edit programs that users could employ to research and complete, update, or correct the data set, so that was a simple, primitive version of the same idea.
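A simple mean-imputation sketch (the ages below are made-up toy data of my own): missing entries, marked `None`, are replaced by the mean of the observed values.

```python
def impute_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

ages = [25, None, 35, 40, None]
print(impute_mean(ages))  # the two gaps become 33.33..., the mean of 25, 35, 40
```

Real pipelines use fancier strategies (median, model-based prediction, multiple imputation), but the principle is the same: fill the gap with a defensible substitute rather than discarding the record.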

Unit 9: Machine learning and artificial intelligence

  • What is machine learning? (Two key aspects of machine learning are big data (text, audio, images, video) and algorithms.)

+Machine learning is another branch of artificial intelligence (AI) and computer science. It focuses on the use of data and algorithms to imitate the way that humans learn, gradually improving its accuracy. Machine learning allows a computer program to learn and adapt to new data without human intervention. A complex algorithm or source code is built into a computer that allows for the machine to identify data, in a neural network, and build predictions around the data that it identifies.

  • Supervised learning

+Supervised learning, also known as supervised machine learning, is a subcategory of machine learning and artificial intelligence. It’s defined by its use of labeled data sets to train algorithms to classify data or predict outcomes accurately. Say what? Well, it’s like you ask AI a question (input) and it works through the input, hidden, and output layers of a neural network, matching against what it learned from the labeled examples to answer your question (output), AND it can additionally find or learn new possible matches.

+The following are some of the common steps involved in supervised learning:

(Supervised learning involves a human “teacher” or “supervisor.” This is becoming a popular and lucrative new job description: work from anywhere and get paid daily.)

1. Gather labeled data

2. Divide the data into two sets: training and testing

3. Select an appropriate algorithm (AI is a whole bunch of algorithms)

4. Train the algorithm on the training set

5. Analyze the algorithm’s performance on the testing set

6. If necessary, fine-tune the model to improve performance

7. Make predictions on new, unlabeled data using the trained model
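The steps above in miniature, using toy data and a deliberately simple 1-nearest-neighbor “algorithm” of my own (all names and numbers are made up): split the labeled data, “train,” evaluate on the held-out set, then predict on new data.

```python
# Labeled data: (measurement, label) pairs.
labeled = [(1.0, "small"), (1.2, "small"), (4.8, "large"), (5.1, "large")]

# Split labeled data into training and testing sets.
train, test = labeled[:3], labeled[3:]

# "Train" (here, just store the examples) and predict by nearest neighbor.
def predict(x, train):
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Analyze performance on the testing set.
accuracy = sum(predict(x, train) == label for x, label in test) / len(test)
print(accuracy)        # 1.0 on this tiny set

# Make a prediction on new, unlabeled data.
print(predict(1.1, train))  # 'small'
```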

  • Unsupervised learning

+Unsupervised learning in AI refers to the following (it doesn’t need anyone to supervise the model):

- A deep learning technique that identifies hidden patterns or clusters in raw, unlabeled data.

- A type of machine learning where the algorithm is given input data without explicit instructions on what to do with it.

- The training of a machine using information that is neither classified nor labeled, allowing the algorithm to act on that information without guidance.

- Creating a model that extracts patterns from unlabeled data (picking and placing) without pre-existing labels.

Supervised vs. Unsupervised Learning: Supervised and unsupervised learning are the two main techniques of machine learning, but they are used in different scenarios and with different datasets.

- In supervised learning, the AI is trained using a labeled dataset, which means that each training example comes with the correct answer (label).

-The goal is to learn a mapping from inputs to outputs, so the AI can predict the label for new, unseen data.

- So, Imagine teaching a child to recognize fruits. You show them pictures of apples and bananas along with the labels "apple" and "banana." After enough examples, the child can identify a new fruit based on what they’ve learned.

- In unsupervised learning, the AI is trained using an unlabeled dataset, meaning there are no correct answers provided. The AI tries to learn patterns and structures from the data on its own.

-The goal is to find hidden patterns or groupings in the data.

- Think of a child sorting a pile of mixed fruits without knowing what they are. The child might group similar-looking fruits together, like all the round ones in one pile and the long ones in another, even if they don’t know the specific types.

Summary:

-Supervised Learning: Learns from labeled data to make predictions.

-Unsupervised Learning: Learns from unlabeled data to find patterns or groupings.

In short, supervised learning needs answers to learn, while unsupervised learning explores the data without any answers!

So, a supervised learning model learns to classify data or accurately predict unseen data based on labeled examples. In contrast, unsupervised learning aims to discover hidden patterns, groupings, and dependencies within unlabeled data and leverage them to predict outcomes. Unsupervised learning can even be used to convert unlabeled data to labeled data, based on the patterns it finds. So that’s about as clear as mud, but it’s real.
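Here is the “child sorting fruit” idea as code, a toy example of my own with made-up numbers: no labels, just numbers grouped by closeness to two centers, refined a few times in the style of k-means clustering.

```python
data = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]   # unlabeled measurements
centers = [0.0, 10.0]                    # deliberately bad starting guesses

for _ in range(5):                       # a few k-means-style refinement rounds
    groups = [[], []]
    for x in data:
        # Assign each point to its nearest center (no labels involved).
        nearest = min((0, 1), key=lambda i: abs(x - centers[i]))
        groups[nearest].append(x)
    # Move each center to the mean of its group.
    centers = [sum(g) / len(g) if g else centers[i]
               for i, g in enumerate(groups)]

print(sorted(round(c, 2) for c in centers))  # two clusters emerge, near 1 and 5
```

Nobody told the algorithm there were “small” and “large” values; the grouping falls out of the data itself, which is exactly the supervised/unsupervised distinction.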

  • Reinforcement Learning

+ Reinforcement learning is an autonomous, self-teaching system that essentially learns by trial and error. It performs actions with the aim of maximizing rewards, or in other words, it is learning by doing, in order to achieve the best outcomes.

Reinforcement learning differs from supervised learning in that supervised training data comes with the answer key, so the model is trained with the correct answer itself, whereas in reinforcement learning there is no answer; the reinforcement agent decides what to do to perform the given task. In the absence of a training dataset, it is bound to learn from its own experience. Just more confusion, but nice to know the general description!
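Trial-and-error learning in miniature, as a toy “two-armed bandit” of my own invention: the agent gets no answer key, only rewards, and gradually learns which action pays better.

```python
import random

random.seed(0)                            # make the run repeatable
true_payout = {"A": 0.2, "B": 0.8}        # hidden from the agent
estimates = {"A": 0.0, "B": 0.0}          # the agent's learned values
counts = {"A": 0, "B": 0}

for _ in range(1000):
    # Explore sometimes; otherwise exploit the best estimate so far.
    if random.random() < 0.1:
        action = random.choice(["A", "B"])
    else:
        action = max(estimates, key=estimates.get)
    reward = 1 if random.random() < true_payout[action] else 0
    counts[action] += 1
    # Running-average update of the estimated value of that action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))
```

After enough trials the agent’s estimate for "B" climbs toward its true payout and the agent prefers it, with no labeled examples ever provided, only rewards.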

Unit 10: Testing artificial intelligence systems

+I’m really concerned that planning, design, development, testing, documentation, training, support, and system maintenance were not more emphasized in this syllabus, given their critical nature.

  • Why test AI systems?

+As AI continues to revolutionize industries, the role of testing becomes increasingly vital. By implementing robust testing strategies and embracing best practices, organizations can unleash the full potential of AI while ensuring its reliability, security, and ethical use. The journey into the era of AI is an exciting one, and with meticulous testing, we can navigate this frontier with confidence, unlocking unprecedented opportunities for innovation and advancement. Embrace the power of AI, test diligently, and pave the way for a future where intelligent systems redefine what’s possible.

Testing for AI systems comes with unique challenges, and requires specialized techniques:

1.The results of these AI-based systems are non-deterministic, i.e., they generate different results for the same input.

2.There is usually human bias in the training and testing data, which needs to be identified and eliminated during AI model testing.

3.AI performs best when given advanced input models.

4.AI is an intricate system, and even small defects are magnified significantly.
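Point 1 above is the big one for testers, so here is a sketch of my own of how you test a non-deterministic component: instead of asserting one exact answer, you assert statistical properties over many runs (the `noisy_model` stand-in and its numbers are invented for illustration).

```python
import random

def noisy_model(x):
    """Stand-in for an AI system: the right answer plus random noise."""
    return 2 * x + random.gauss(0, 0.1)

random.seed(42)                                 # repeatable test run
outputs = [noisy_model(3) for _ in range(200)]
average = sum(outputs) / len(outputs)

# Property-style checks: the mean is near 6 and the runs genuinely differ,
# even though no two runs give identical results.
print(abs(average - 6) < 0.05, len(set(outputs)) > 1)
```

This is the general pattern: fix the random seed where you can, and where you can’t, test tolerances and distributions rather than exact outputs.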

  • Software lifecycle costs

+The lifecycle costs of AI software can vary significantly depending on the complexity and scope of the project. Here’s a breakdown of the typical costs involved:

1.Requirement Analysis and Design: This initial phase involves extensive consultation and requirement analysis sessions to conceptualize the AI system’s functionalities and user interface design.

2.Development and Testing: This phase includes the actual coding, integration, and rigorous testing of the AI software.

3.Deployment and Maintenance: Once the AI software is developed, it needs to be deployed and maintained. This includes regular updates, bug fixes, and performance monitoring.

4.Operational Costs: These include the costs of running the AI software, such as cloud computing resources, data storage, and energy consumption. These costs can be substantial, especially for large-scale AI applications.

5.Training and Support: Training users and providing ongoing support is crucial for the successful adoption of AI software. This can involve additional costs depending on the level of support required.

6.Lifecycle Management: Managing the AI lifecycle involves continuous monitoring, updating, and optimizing the AI models to ensure they remain effective and relevant. This necessary function adds to the overall costs.

Most companies will purchase AI solutions from vendors. Have you ever worked where this was the case? It takes more than most companies want to do, but they do it because it’s an economic and efficiency opportunity! Employees and customers suffer first from the poor planning and risk-management decisions.

  • Increasing adoption of AI/ML

+ The adoption of artificial intelligence (AI) and machine learning (ML) is rapidly increasing across business sectors, driven by their potential to revolutionize industries and improve efficiency. Here are some key trends and impacts:

Technological Advancements:

The integration of big data and cloud computing with AI/ML is enabling more effective deployment and scalability of these technologies.

Impacts of AI/ML Adoption

1.Operational Efficiency:

AI/ML can automate routine tasks, allowing human workers to focus on more complex and creative activities. This leads to increased productivity and efficiency across various sectors.

2.Enhanced Decision-Making:

By analyzing large datasets, AI/ML can provide insights that help organizations make more informed decisions, leading to better outcomes and competitive advantages.

3.Security Improvements:

AI/ML technologies are being used to enhance cybersecurity measures, detect anomalies, and prevent potential threats in real time.

4.Economic Growth:

The widespread adoption of AI/ML is expected to contribute significantly to economic growth by creating new job opportunities and driving innovation.

Challenges and Considerations

1.Ethical and Privacy Concerns:

The use of AI/ML raises important ethical questions, particularly around data privacy and bias in algorithms. Ensuring transparency and fairness in AI/ML applications is crucial.

2.Skill Gaps:

There is a growing demand for skilled professionals who can develop and manage AI/ML systems. Addressing this skill gap through education and training is essential for continued growth.

3.Regulatory Frameworks:

Developing robust regulatory frameworks to govern the use of AI/ML is necessary to ensure these technologies are used responsibly and ethically. The increasing adoption of AI/ML is transforming industries and driving innovation, but it also requires careful consideration of ethical, privacy, and regulatory issues to maximize its benefits.

  • Uncertainty and Oracles

+ Uncertainty in AI refers to the inherent unpredictability and ambiguity in data and decision-making processes. This can arise from various sources, such as noisy or incomplete data, model limitations, and the complexity of real-world environments. Managing this uncertainty is crucial for developing robust AI systems.

+ Oracles in AI are theoretical entities or mechanisms used to provide correct answers or guidance during the training and evaluation of AI models. They help validate the performance of AI systems by offering a benchmark, or ground truth, against which the AI's predictions can be compared.
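The oracle idea above can be sketched in a few lines: treat the oracle simply as the ground-truth labels and score a model's predictions against them. This is an illustrative sketch, not any particular vendor's tool; the labels and predictions are made-up numbers.

```python
# Hedged sketch: the "oracle" here is just the ground-truth labeling
# used as a benchmark, as described in the note above.
def accuracy(predictions, oracle_labels):
    """Fraction of predictions that agree with the oracle's answers."""
    matches = sum(p == o for p, o in zip(predictions, oracle_labels))
    return matches / len(oracle_labels)

model_predictions = [1, 0, 1, 1]   # hypothetical model outputs
oracle_labels     = [1, 0, 0, 1]   # hypothetical ground truth

print(accuracy(model_predictions, oracle_labels))  # 0.75
```

Any evaluation metric (precision, recall, etc.) works the same way: the oracle supplies the answers, and the metric measures how far the model strays from them.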

  • Ethical dilemmas

+ Ethical dilemmas in AI include:

1.Bias: AI algorithms and training data may contain biases.

2.Data privacy and protection: Concerns about privacy and surveillance.

3.Decision accountability: The role of human judgment in AI.

4.Environmental impact: Considerations related to AI's carbon footprint.

5.Effects on the workforce: Unemployment and income inequality due to automation.

  • Adversarial inputs

+ Adversarial inputs are specially crafted inputs designed to be reliably misclassified in order to evade detection. They are created to mislead machine learning models into making inaccurate and wrong predictions. Adversarial examples are inputs that an attacker has intentionally designed to cause the model to make a mistake. Vendors and defending organizations can defend against malicious adversarial machine learning by generating many adversarial examples and then training the ML model to handle them properly. OK, no company wants this to happen, so who you gonna call? With 40 years in computer technology, I saw 2 types of technicians:

1.     Some fight change.

2.     Others embrace it.

Only embracers will be up to this challenge!
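The adversarial-example idea above can be sketched with a toy model. This is a minimal illustration, assuming a tiny hand-built logistic-regression classifier with made-up weights; it nudges each input feature against the gradient's sign (the fast gradient sign method) so a confidently classified input flips to the wrong side of the decision boundary.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical trained weights and bias for a 3-feature classifier.
w = [2.0, -3.0, 1.5]
b = 0.1

x = [0.5, 0.2, 0.8]                 # a clean input the model scores as class 1
clean_score = sigmoid(dot(w, x) + b)

# The gradient of the class-1 score w.r.t. x is w * s * (1 - s), so its
# sign is just sign(w). Stepping each feature the opposite way by a small
# epsilon pushes the score down while keeping the input close to x.
eps = 0.4
x_adv = [xi - eps * math.copysign(1.0, wi) for xi, wi in zip(x, w)]
adv_score = sigmoid(dot(w, x_adv) + b)

print(f"clean: {clean_score:.3f}  adversarial: {adv_score:.3f}")
```

The defense the note mentions, adversarial training, amounts to generating many such perturbed inputs and adding them, correctly labeled, to the training set.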

  • Challenges of testing AI

+ With 40 years in computer technology, issues in planning, design, development, testing, training, documentation, and support of computer technology have always been a problem. Smart folks don’t like mundane, repetitive work and rebuff its importance. This human characteristic will never change and must be constantly managed. So, this is a good AI application!

+ Challenges in AI testing fall primarily on AI vendors, but also on large companies developing custom AI models. These challenges include:

1.Complexity of AI models

2.Lack of standard testing frameworks

3.Data quality and bias

4.Interpreting AI decisions

5.Integrating AI testing into development lifecycle

  • Testing in machine learning

+ Machine learning testing involves evaluating and validating the performance of ML models to ensure correctness, accuracy, and robustness. Unlike traditional software testing, ML testing includes additional layers due to the complexity of ML models. There are four major types of tests used in ML development:

1.Unit tests: on individual components with single responsibilities.

2.Integration tests: on combined functionality of individual components.

3.System tests: on the design of a system for expected outputs given inputs.

4.Acceptance tests: in a controlled test bed, use employees and actual test cases to prove the validity of the AI system.
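The first type above, a unit test, is the easiest to show concretely. This is a hedged sketch (the post names no framework or code): a plain-Python unit test for one small ML-pipeline component, a hypothetical min-max normalization helper, checking both the normal case and an edge case.

```python
def normalize(values):
    """Scale a list of numbers into the range [0, 1] (min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]   # constant input: avoid divide-by-zero
    return [(v - lo) / (hi - lo) for v in values]

def test_normalize_range():
    # Unit test: one component, one responsibility (type 1 above).
    out = normalize([3, 7, 5])
    assert min(out) == 0.0 and max(out) == 1.0

def test_normalize_constant_input():
    # Edge case: a constant feature column must not crash the pipeline.
    assert normalize([4, 4]) == [0.0, 0.0]

test_normalize_range()
test_normalize_constant_input()
print("all unit tests passed")
```

Integration and system tests then combine such verified pieces: feed known inputs through the whole pipeline and check the expected outputs come out the other end.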

  • AI within wider systems

+ Harvard Business Review nailed it: “Design AI systems for humans, by humans. The leading companies in our research recognize that AI now allows them to build systems that talk, listen, see, and understand much the way we do. They know that tomorrow’s advantage will go to those who design systems that adjust to people — not those who continue to expect people to adjust to systems.”

CUSTOMER SERVICE has been losing to system requirements! If you’re like me, you call and get a digital answering service; after options 1 through 9, none meets your need; you hold for an inordinate time, only to be transferred among several departments, repeatedly restating your question and finding only more confusion. AI must alleviate this situation!