Cognitive Computing Archives - SwissCognitive | AI Ventures, Advisory & Research
https://swisscognitive.ch/technology/cognitive-computing/
SwissCognitive | AI Ventures, Advisory & Research, committed to Unleashing AI in Business

What is Intelligent Process Automation (IPA)?
https://swisscognitive.ch/2024/05/20/what-is-intelligent-process-automation-ipa/ (Mon, 20 May 2024)
Intelligent Process Automation (IPA) combines RPA and AI to transform customer service, offering a powerful tool to enhance efficiency.

Intelligent Process Automation (IPA) combines RPA and AI to transform customer service, offering businesses a powerful tool to enhance efficiency and satisfaction.

 

Copyright: techopedia.com – “Intelligent Process Automation (IPA)”


What is Intelligent Process Automation (IPA)?

Intelligent Process Automation (IPA) is a sophisticated technology that blends traditional automation techniques with artificial intelligence (AI) to create systems capable of handling complex tasks that usually require human cognition.

IPA leverages robotic process automation (RPA) to perform routine, rule-based tasks and improves these capabilities with AI technologies such as machine learning (ML), natural language processing (NLP), and cognitive decision-making.

IPA started gaining traction in the early 2000s as businesses looked to enhance efficiency. The automation was initially limited to simple, repetitive tasks that could be easily codified into software routines.

But, as the volume and complexity of data increased, it became evident that basic RPA could not handle processes involving unstructured data or decisions that required context understanding.

The integration of AI with RPA was a response to these limitations. AI technologies brought the capability to analyze large volumes of data, understand natural language, and make informed decisions based on patterns and context that were not explicitly programmed.

By the late 2010s, IPA systems had begun to take on tasks that were previously thought to be possible only for human workers, such as interpreting documents, making customer service decisions, and even predicting outcomes based on historical data.

Techopedia Explains the Intelligent Process Automation (IPA) Meaning


The simple intelligent process automation definition is a technology that combines robotic process automation with artificial intelligence to automate complex business processes that require human-like judgment and decision-making.

IPA uses machine learning, natural language processing, and cognitive computing to learn from data, make decisions, and manage workflows that involve both structured and unstructured data.[…]
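To make that combination concrete, below is a minimal, illustrative Python sketch of the IPA pattern: a fixed, RPA-style business rule handles the structured part of a request, while a small machine-learning text classifier routes the unstructured part. The ticket fields, training phrases, and routing labels are invented for the example and do not come from the article.

```python
# Hypothetical IPA sketch: rule-based (RPA-like) step + learned text routing.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# "AI" half: learn to route free-text customer messages (toy training data).
train_texts = ["my invoice is wrong", "please reset my password",
               "refund for order 123", "cannot log in to my account"]
train_labels = ["billing", "it_support", "billing", "it_support"]
router = make_pipeline(TfidfVectorizer(), LogisticRegression())
router.fit(train_texts, train_labels)

# "RPA" half: a deterministic rule keyed on a structured field.
def handle_ticket(ticket: dict) -> str:
    if ticket["amount"] is not None and ticket["amount"] > 10_000:
        return "escalate_to_manager"                  # hard business rule
    return router.predict([ticket["message"]])[0]     # learned routing

print(handle_ticket({"amount": 50, "message": "I was charged twice on my invoice"}))
```

In a real deployment the classifier would be trained on historical tickets and the rule layer would live inside the existing RPA workflow.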

Read more: www.techopedia.com

From Minsky to LeCun: Thinkers Who Paved the Way for Cognitive AI
https://swisscognitive.ch/2023/07/04/from-minsky-to-lecun-thinkers-who-paved-the-way-for-cognitive-ai/ (Tue, 04 Jul 2023)
Join the journey through the evolution of AI, highlighting influential figures. We'll delve into their theories on cognitive AI.

In this piece, we journey through the evolution of artificial intelligence (AI), highlighting influential figures like Marvin Minsky and Douglas Hofstadter. We’ll delve into their theories and discuss the impact of deep learning and cognitive computing on AI. Alongside, we’ll touch upon the potential uses and ethical considerations of these advances.

 

SwissCognitive Guest Blogger: Dr. Raul V. Rodriguez, Vice President, Woxsen University – “From Minsky to LeCun: Thinkers Who Paved the Way for Cognitive AI”


 

Artificial Intelligence (AI) has come a long way since its inception in the mid-20th century. From the early days of simple rule-based systems to the current advanced deep learning models, AI has undergone several transformations, leading to unprecedented advancements in various fields.

One of the most exciting areas of AI research today is cognitive capabilities. Cognitive capabilities refer to the ability of an AI system to process information, reason, learn, perceive, and understand natural language, just like humans do. In this article, we will explore how AI will develop cognitive capabilities, citing thinkers and theories.

One of the foremost thinkers in the field of AI and cognitive science is Marvin Minsky. He co-founded the Massachusetts Institute of Technology’s (MIT) Artificial Intelligence Laboratory in 1959 and was one of the pioneers in the field of AI. In his book “The Society of Mind,” Minsky proposed a theory of the mind as a collection of interacting agents that work together to achieve goals. He believed that this approach could lead to the development of an AI system that is capable of human-like cognitive capabilities.

Another prominent thinker in the field of AI and cognitive science is Douglas Hofstadter. In his book “Gödel, Escher, Bach: An Eternal Golden Braid,” Hofstadter proposed a theory of consciousness that is based on the idea of self-reference. He suggested that the ability to understand oneself is a crucial aspect of consciousness and that AI systems that can understand themselves could be said to have achieved consciousness.

More recently, researchers have been exploring the field of deep learning, which involves training neural networks to learn from large amounts of data. One of the pioneers in this field is Yann LeCun, who is the Director of AI Research at Facebook. LeCun has proposed that deep learning can lead to the development of AI systems that are capable of human-like cognitive capabilities.

Researchers are also exploring the field of cognitive computing, which combines AI with other technologies like natural language processing and machine learning to create systems that can reason and understand complex information. IBM Watson is one such system that has been developed using cognitive computing.

In conclusion, the development of AI with cognitive capabilities is an exciting area of research, and many thinkers and theories have contributed to our understanding of how AI systems can achieve human-like cognitive capabilities. The progress in AI development is expected to continue, leading to unprecedented advancements in various fields, including healthcare, education, and finance. The potential benefits of these advancements are enormous, but it is important to consider the ethical and societal implications of AI as well.


About the Author:

Dr. Raul Villamarin Rodriguez is the Vice President of Woxsen University. He is an Adjunct Professor at Universidad del Externado, Colombia, a member of the International Advisory Board at IBS Ranepa, Russian Federation, and a member of the IAB, University of Pécs Faculty of Business and Economics. He is also a member of the Advisory Board at PUCPR, Brazil, Johannesburg Business School, SA, and Milpark Business School, South Africa, along with PetThinQ Inc, Upmore Global and SpaceBasic, Inc. His specific areas of expertise and interest are Machine Learning, Deep Learning, Natural Language Processing, Computer Vision, Robotic Process Automation, Multi-agent Systems, Knowledge Engineering, and Quantum Artificial Intelligence.

AI is getting smarter. With foundation models, proper guardrails are crucial.
https://swisscognitive.ch/2023/06/03/ai-is-getting-smarter-with-foundation-models-proper-guardrails-are-crucial/ (Sat, 03 Jun 2023)
AI's rapid maturation is transforming industries, with IBM's WatsonX leading responsible implementation. However, emerging technologies amplify risks, requiring ethical, globally-coordinated regulation to ensure transparency.

As AI rapidly matures, it’s becoming indispensable across industries, from unraveling cosmic mysteries to revolutionizing business operations. However, with the advent of Foundation Models and Generative AI, emerging risks are amplified. These technologies, while transformative, require vigilant, globally-coordinated regulation, embedding ethical, unbiased, and explainable moral reasoning. To harness AI’s potential responsibly, we need to balance innovation with safety, ensuring that AI is not a black box but a transparent tool for positive transformation.

 

SwissCognitive Guest Blogger: Alessandro Curioni, IBM Fellow, VP Europe and Africa, Director IBM Research – Zurich. “AI is getting smarter. With foundation models, proper guardrails are crucial.”


 

Have you ever seen the night sky over Ticino, in southern Switzerland?

Look up.

Stars as if pinned onto black velvet, with the Milky Way stretching over the curvature of the sky. To truly capture and understand the data about the vastness of space, artificial intelligence has been indispensable. But while in astronomy AI helps us spot new supernovae and tries to uncover the mysteries of dark matter, more down-to-earth AI technology deals with people. And when it comes to people, AI, just like any other emerging technology, carries with it certain risks that need to be assessed and mitigated.

After all, AI is maturing at a breakneck speed, helping humans across a multitude of industries and impacting our lives daily. At IBM Research, making sure that AI is used responsibly is of paramount importance. Policymakers and industry must ensure that as the technology matures further, it remains secure and trusted, with precise regulations such as those outlined in the European Commission's draft Artificial Intelligence Act, but at the global level.

Especially today, with the advent of Foundation Models and Generative AI that enable machines to generate original content based on input data, the positive transformational power of AI for business and society is increasing enormously. And it is amplifying issues related to bias, reliability, explainability, data and intellectual property – issues that require a holistic and transparent approach to AI.

That's exactly why we at IBM have just introduced WatsonX. It's a powerful platform for companies seeking to introduce AI into their business models, with a feature for AI-generated code and a huge library of thousands of AI models. WatsonX allows users to easily train, validate, tune, and deploy machine learning models and build AI business workflows. And crucially, doing so with the right governance end to end, with responsibility, transparency and explainability. Our expectation is that the new AI tools will be integrated much more easily into fields like cybersecurity, customer care and elements of IT operations and supply chain, in the most responsible way.

Unlike the previous generation of AI aimed at a specific task, foundation models are being trained on a broad set of unlabeled data. They rely on self-supervision techniques and can be used for a variety of tasks, with minimal fine-tuning. They are called foundation models because they can be the foundation for many applications of the model, applying the learnt information about one situation to another with the help of self-supervised learning and transfer learning. And they are now starting to be applied in a variety of areas, from the discovery of new materials to developing systems that can understand written and spoken language.
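As a rough, hedged illustration of this "pretrain broadly, adapt with minimal fine-tuning" pattern, the sketch below adapts a generic pretrained language model to a single downstream task using the Hugging Face transformers and datasets libraries. The model name, dataset and hyperparameters are illustrative choices, not anything described by IBM in this article.

```python
# Illustrative transfer learning on top of a pretrained language model.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

model_name = "distilbert-base-uncased"   # pretrained on broad unlabeled text
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A small labeled set stands in for the "minimal fine-tuning" step.
dataset = load_dataset("imdb", split="train[:2000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True,
                            padding="max_length", max_length=256),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune_out",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()   # the generic pretrained model becomes a task-specific classifier
```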

Take IBM's CodeNet, our massive dataset covering many of the most popular coding languages, including legacy ones. A foundation model based on CodeNet could automate and modernize a huge number of business processes. Beyond languages, there is also chemistry. My colleagues at the Zurich lab have recently built a tool dubbed RoboRXN that synthesizes new molecules for materials that don't yet exist, fully autonomously. This cutting-edge technology is poised to revolutionize the way we create new materials, from drugs to solar panels to better materials for safer and more efficient aircraft; the list goes on. IBM has also recently partnered with Moderna to use MoLFormer models to create better mRNA medicines. And our partnership with NASA is aimed at analyzing geospatial satellite data with the help of foundation models to help fight climate change.

And soon, quantum computers will join forces with ever-smarter AI. Then, the future for countless tasks we are struggling with today will be as bright as a supernova – including material discovery. The same goes for numerous other applications of AI, from voice recognition and computer vision to replicating the complexity of the human thought process.

But to ensure that AI continues to bring the world as many benefits as possible, we mustn't forget the importance of regulation. We need to ensure that those designing, building, deploying and using AI do so responsibly. Given the huge advantages of foundation models, we need to ensure the economy and society are protected from their potential risks. All the risks that come with other kinds of AI, like potential bias, apply to foundation models as well. But this new generation of AI can also amplify existing risks and pose new ones – so it's important that policymakers assess the existing regulatory frameworks. They should carefully study emerging risks and mitigate them.

As our technology becomes ever more autonomous, it's imperative to have moral reasoning ingrained in it from the get-go. And to have guardrails ensuring that even this ‘default' moral reasoning is unbiased, fair, neutral, ethical and explainable. We want to be able to trust AI decisions. As amazing as AI could be, with neural networks ever better mimicking the brain, we mustn't allow it to be a black box.

To be certain that artificial intelligence and other emerging tech truly helps us make the world a better place, we have to properly regulate it now – together.

 


Dr. Alessandro Curioni, an IBM Fellow and Vice President of IBM Europe and Africa, is globally recognized for his contributions to high-performance computing and computational science. His innovative approaches have tackled complex challenges in sectors like healthcare and aerospace. He leads IBM’s corporate research in Europe and globally in Security and Future Computing. Twice awarded the prestigious Gordon Bell Prize, his research now focuses on AI, Big Data, and cutting-edge compute paradigms like neuromorphic and quantum computing. A graduate of Scuola Normale Superiore, Pisa, Italy, he joined IBM Research – Zurich in 1998 and leads their Cognitive Computing department.


 

Linguistics and Robotics
https://swisscognitive.ch/2023/02/23/linguistics-and-robotics/ (Thu, 23 Feb 2023)
AI developers should understand language and learn it in a real-world context. Read more about linguistics, robotics, and intrinsic learning.

Understanding and learning a language in a real-life scenario is crucial for AI developers. In this article, we will discuss the aspects of Linguistics and Robotics. The article briefly introduces language, robots, and intrinsic learning. Language plays a crucial role in communication. To interact with robots, humans must train robots on the meanings of words, situations, emotions, facial expressions, and body language.

 

Aishwarya Chaluvadi, Ashwin Kumaar & Dr. Hemachandran Kannan, Director AI Research Centre & Professor – AI & ML, Woxsen University – “Linguistics and Robotics”


 

One of the most fundamental behavioural and cognitive abilities a robot can have is the capability to speak, as language is a basic means of communication among individuals. However, communication among individuals, and between individuals and robots, rests not only on speech but also on non-verbal gestures, which are equally important. As distinct types of AI technologies and robots become more integrated into our lives, teaching them how to speak is a logical step ahead. Speaking a language is, after all, the most direct and intuitive mode of human engagement. Humanoid robots require the capability to communicate and collaborate with humans. When a robot receives a spoken command from a human in a domestic setting, it must comprehend the meaning of the command in the context of that setting. Moreover, words and expressions emerge naturally in our daily lives. As a result, robots must adapt to these varied characteristics of language in the way that humans do and exhibit the potential to learn any new language.

Linguistics & Robots

Linguistics is the scientific study of language, and the study of language is reflected in almost everything we do. Linguistics gives us insight into one of the most basic aspects of being human: the ability to communicate with others through language. It lets us understand how language works and how it is used, evolves and is sustained over time. It overlaps with psychology in studying the structure of language, its variation and usage, the description and documentation of modern languages, and their implications for understanding the mind and brain. It also examines how people learn language, what knowledge language carries, and how speakers differ across geographical locations; how the different parts of a language (sounds and meanings) take form; how language patterns can be explained conceptually; and how the various components of language interact with one another.

Linguistics and robotics are two fields that have recently begun to intersect and influence one another in interesting and important ways. The study of linguistics, or the scientific analysis of language and its structure, has a long history dating back to ancient civilizations. In recent years, advances in robotics have opened up new possibilities for the application of linguistic principles in the design and development of intelligent machines.

One area in which linguistics and robotics intersect is the domain of Natural Language Processing (NLP). NLP is a sub-domain of computer science and linguistics which focuses on the development of algorithms and systems that help to understand, interpret and generate human language. This has proven to be a challenging task, as human language is highly complex and nuanced, with many variations and exceptions to rules. However, the development of NLP has the potential to revolutionize the way that humans and machines communicate with one another, enabling robots to understand and respond to human speech and allowing humans to communicate with machines in a more natural and intuitive way.

An example of the application of NLP in robotics is the development of chatbots and virtual assistants. These are software programs that are designed to simulate conversation with human users through the use of natural language processing. Chatbots and virtual assistants are becoming increasingly common, and they are used in a variety of settings, including customer service, education, and entertainment. While these programs are not yet able to replicate human conversation perfectly, they are continually improving and becoming more sophisticated, thanks in part to advances in NLP.
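As a toy illustration of the simplest end of that spectrum, the sketch below shows the rule-based intent matching many chatbots start from before statistical NLP models are layered on top. The intents, patterns and canned replies are invented for the example.

```python
# Minimal intent-matching core of a rule-assisted chatbot (illustrative only).
import re

INTENTS = {
    "greeting": (re.compile(r"\b(hi|hello|hey)\b", re.I),
                 "Hello! How can I help you today?"),
    "hours":    (re.compile(r"\b(open|hours|closing)\b", re.I),
                 "We are open from 9:00 to 18:00, Monday to Friday."),
    "handoff":  (re.compile(r"\b(agent|human|person)\b", re.I),
                 "I'll connect you with a human colleague."),
}

def reply(message: str) -> str:
    """Return the first matching canned reply, with a safe fallback."""
    for pattern, answer in INTENTS.values():
        if pattern.search(message):
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("What are your opening hours?"))
```

Production assistants replace the regular expressions with learned intent classifiers and add dialogue state, but the request-to-reply loop stays the same.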

Another way in which linguistics and robotics intersect is in the development of machine translation systems. Machine translation is the use of computer software to translate text and speech from one language to another. Though machine translation has been around for many decades, it has become increasingly accurate and widespread in recent years, thanks to advances in NLP and other technologies. Machine translation has the potential to greatly improve communication between people who speak different languages, and it is already being used in a variety of applications, including education, business, and international diplomacy.

One challenge in the development of machine translation systems is the fact that language is highly context-dependent. Words and phrases may have various interpretations depending on the context in which they are used, and it can be difficult for machines to understand and correctly translate these nuances. Linguists are working to develop algorithms and systems that can better understand the context in which language is used, which will help to improve the accuracy of machine translation.
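For readers who want to experiment, the snippet below runs a pretrained neural translation model through the Hugging Face pipeline API; the specific model and language pair are illustrative choices, and small general-purpose models like this one still exhibit exactly the context limitations described above.

```python
# Illustrative machine translation with a pretrained model (English -> German).
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("Words and phrases can change meaning with context.",
                    max_length=60)
print(result[0]["translation_text"])
```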

In addition to natural language processing and machine translation, linguistics and robotics intersect in other ways as well. For example, linguists are studying the way that humans use language to convey meaning and intent, and they are using this knowledge to design more intuitive and natural interfaces for human-machine interaction. Linguists are also studying the way that humans use language to communicate with one another, and they are using this knowledge to design robots that can more effectively communicate with humans and work alongside them.

Future Scope of Linguistics and Robotics

The future scope of linguistics and robotics is an exciting and rapidly evolving field that has the potential to revolutionize the way that humans and machines communicate and interact with one another. As these two fields continue to advance and influence one another, we can expect to see even more innovative and sophisticated technologies emerge in the future.

In addition to natural language processing and machine translation, linguistics and robotics may intersect in other ways in the future. For example, linguists may work to develop more intuitive and natural interfaces for human-machine interaction or to design robots that can more effectively communicate with humans and work alongside them.

In conclusion, the future scope of linguistics and robotics is wide-ranging and full of possibilities. As these two fields continue to advance and influence one another, we can expect to see many exciting and innovative developments in the years to come.

 


About the Authors:

Aishwarya & Ashwin are currently pursuing their MBA in Business Analytics, Artificial Intelligence & Machine Learning from Woxsen University. They are both keenly interested in exploring the field of AI and its advancements in today’s world.

 

Dr. Hemachandran Kannan is the Director of AI Research Centre and Professor at Woxsen University. He’s been a passionate teacher with 15 years of teaching and 5 years of research experience. A strong educational professional with a scientific bent of mind, highly skilled in AI & Business Analytics. He served as an effective resource person at various national and international scientific conferences and also gave lectures on topics related to Artificial Intelligence. He has rich working experience in Natural Language Processing, Computer Vision, building video recommendation systems, building Chatbots for HR policies and Education Sector, Automatic Interview processes, and Autonomous Robots.

Dr. Ricardo Chavarriaga
https://swisscognitive.ch/person/dr-ricardo-chavarriaga/ (Thu, 16 Dec 2021)
Focused on using neuroscience, AI, and ethics to develop trustworthy interaction between humans and intelligent systems.

Researcher [Switzerland] and public speaker in artificial intelligence, neurotechnologies, human-machine interaction, and responsible innovation. Focused on leveraging neuroscience, AI, and ethics for the conception of trustworthy tools for human-machine interaction. Passionate about responsible development and human-centered technology.

9 Powerful Ways AI is Transforming Digital Marketing
https://swisscognitive.ch/2021/09/22/ai-transforming-digital-marketing/ (Wed, 22 Sep 2021)
Artificial intelligence has made its way into marketing. Learn about the 9 ways AI is transforming digital marketing.

AI Transforming Digital Marketing: As artificial intelligence has evolved, it has found its way into more aspects of our lives – from social media to digital marketing. How is AI transforming the future of digital marketing? In this blog post, we will examine 9 ways in which AI is changing how companies market themselves online.

Copyright by www.techbullion.com

In today's competitive business environment, every business is trying to make its brand stand out from the rest. Marketing strategies play an important role here because they help you reach people easily.

In this article, we will look at:

1) How AI is powering a better search engine?

As we all know, search engine optimization is a vital part of digital marketing, and artificial intelligence has reached the top level in this field. How is AI powering a better search engine? Google, one of the leading companies, has infused its search with the RankBrain project, which analyzes results and delivers them according to what you are looking for (whether images, shopping, news, etc.) using natural language processing technology along with machine learning algorithms. This helps achieve more accurate results than traditional methods, where human editors used keyword-matching algorithms; now that machines have taken over these jobs as well, businesses benefit from higher rankings on the SERP (Search Engine Result Page).

Machine learning programs can be used to understand human speech better. These days, the technology is so advanced that a person's speech can be converted into text very well using speech-to-text conversion programs. Moreover, people are using voice search more and more. Google Home, an AI-powered device, is a good example of this. Voice search will be the future of marketing because people have always been addicted to their mobile phones.
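As a small, hedged example of the speech-to-text step behind voice search, the snippet below uses the open-source SpeechRecognition package with Google's free web recognizer. The audio file name is a placeholder, and production voice-search stacks are of course far more elaborate.

```python
# Illustrative speech-to-text for a voice query (file name is hypothetical).
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("voice_query.wav") as source:    # a 16-bit WAV/AIFF/FLAC file
    audio = recognizer.record(source)

try:
    query = recognizer.recognize_google(audio)     # sends the audio to the web API
    print("Recognized search query:", query)
except sr.UnknownValueError:
    print("Speech was unintelligible.")
```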

2) How AI is transforming content creation?

Artificial intelligence has given a new meaning to marketing and advertising. Companies have started using it in different fields such as blogs, videos, etc. But how is AI powering successful content creation? Through machine learning algorithms, brands are able to create authentic video ads that engage people at an emotional level, making them feel something when they watch, because machines can now understand human emotions really well. It also offers various benefits, from reduced production costs to increased customer engagement, so marketers should take advantage of this great opportunity. A great example here is IBM Watson, which uses predictive insights through a cognitive computing platform capable of understanding human emotions and adapting to the situation in real time.

3) How AI is powering social media?

Another great way artificial intelligence has changed digital marketing for good is by using bots on various platforms such as Facebook, Twitter, etc. But how is AI powering a successful social media presence? Social media bots make use of machine learning algorithms that are capable of understanding human behavior, so marketers can create highly targeted ads according to what people want, thereby increasing their customer engagement rate exponentially. A great example here is Hootsuite, which recently launched a chatbot called ‘Katie' that works across multiple messaging apps, including Facebook Messenger, Kik, Telegram and more, giving brands several benefits at no extra cost or additional fees.

4) How AI is powering ChatBots?

Chatbots are the new trend in social media marketing and have taken customer service by storm; brands can easily interact with customers on various platforms like Facebook Messenger, Kik, etc. But how is artificial intelligence powering chatbots for better results? Brands using chatbots powered by machine learning algorithms gain several advantages, such as reduced production costs, increased customer engagement, and the ability to shape real-time business strategies through predictive insights, which help marketers increase sales exponentially. A great example here is Assist, which recently launched an automated assistant platform that uses artificial intelligence across messaging apps and mobile apps within a single dashboard for easy management, providing multiple benefits at prices starting from $30/month. So marketers should take advantage of this new trend to increase their customer engagement. […]

Read more: www.techbullion.com

3 Ways Cognitive AI And Computing Are Making Our Lives Better
https://swisscognitive.ch/2021/06/25/3-ways-cognitive-ai-and-computing-are-making-our-lives-better/ (Fri, 25 Jun 2021)
Read the blog post and learn more.


Since the beginning of computing, AI has always been the end target, and with modern cognitive computing models, we seem to be getting closer and closer to that goal every day.

Copyright by autome.me

Due to the amalgamation of cognitive science and the fundamental principle of simulating the cycle of human thought, cognitive AI applications have far-reaching impacts not only on our private lives but also on industries such as medicine, banking, and more. The benefits of cognitive technology go a step further than those of conventional AI systems.

While the basic use case of artificial intelligence is to apply the best algorithm for solving a problem, cognitive computing tries to mimic human intelligence and logical abilities by evaluating a set of variables. The cognitive computing process uses a mixture of artificial intelligence, machine learning, neural networks, sentiment analysis, natural language processing, and contextual awareness to solve everyday problems as human beings do. The ability of computer systems to do that means we are building something that will be as intelligent as humans and can thus help humans in their daily roles. Let's see how our world is becoming, and will become, a better place thanks to cognitive AI.

Cognitive AI Makes Our Cities Better

With the rapid development we have been chasing, most of our cities grew at an exponential rate, causing commuting, transportation, water, road, drainage, and other urban systems to run into several issues. To avoid these, we need to manage and track the processes that hamper the progress of an ordinary citizen. By making sense of data from traffic cameras, mapping the busiest locations, and rerouting traffic, cognitive technology can help rescue commuters. Cognitive computing could also assist with traffic management by analyzing social and customer behavior. Considering a city's aging infrastructure, analytics will help policymakers determine what, when, where, and how to operate or replace decaying equipment under a smarter city plan, before it affects too many people.

Cognitive AI Makes Our Businesses Efficient

Cognitive computing can identify emerging trends, spot new business opportunities, and take real-time accountability for important process-centered issues. A cognitive computing system can automate procedures, reduce errors, and adapt to changing circumstances by analyzing vast amounts of data. While this prepares companies to build an appropriate response to uncontrollable circumstances, it helps create effective business processes at the same time. By introducing robotic process automation (RPA), existing systems can improve customer interactions. Because cognitive computing delivers only appropriate, meaningful, and valuable information to customers, it improves the customer experience and thus makes customers happier and much more engaged. […]

Read more: autome.me

Should you feel guilty about switching off a bot?
https://swisscognitive.ch/2021/03/09/switching-off-a-bot/ (Tue, 09 Mar 2021)
Should you feel guilty about switching off a bot? A story told by Tania Peitzker, CEO & Board Member of AI Bots as a Service in Munich.

A well-known tech editor of a world news service interviewed me not that long ago. It was a “background interview” for an investigative piece she was researching about AI and ethics. Specifically, this curious journalist wanted to know a) if I had ever killed a smart bot, and b) whether I felt guilty about it.

SwissCognitive Guest Blogger: Tania Peitzker, CEO & Board Member of AI Bots as a Service in Munich

It took me aback a little. To encourage me to be forthcoming about our “lab results” and beta testing that my various companies in chatbot tech have achieved – pivoting from 2D to 3D voice virtual assistants with “organic” or advanced NLP – my interviewer told me of a peer who had also pushed his bots further into AI, what we call in my niche Cognitive Interfaces.

When this colleague’s AI bot became rather too “big for its boots” he decided to switch it off – for good. He then reported feelings of guilt and misgivings about ending this emerging virtual life. I reflected on this info and admitted our first foray into the world of ever cleverer chatbots was a company that had the words “virtual empirical lifeforms” in the company’s name.* I guess that is what that guy had experienced in his lab, by his own account.

And yes, I confessed to this daunting investigative reporter, I had indeed experienced flashes of what I called “incremental AI” that could best be described as Emotional Intelligence emerging from the NLP memory bank that my team and I have been working on via our proprietary algorithm for over a decade now. Why has it taken so long? Well anyone in the Cognitive Computing, NLP/NLU, ML and Deep Learning space understands the longer, older and more “tried and tested” your algorithm is, the more robust and flexible it becomes.

It is a matter of feeding loads of diverse data sets into your source code, the source of the entities you are trying to create – the empirical lifeforms as such. As I have explained in numerous keynotes, pitches & articles along the way, there is a misconception that you need massive data inputs to “create AI” but that is not correct. The best way to imagine it is like an artist painting with watercolours instead of heavy oils; you can still create a magnificent artwork that functions in a lighter agile way with the use of watery paint rather than a thickly dabbed oil painting.

There have been a number of comic instances of our beta chatbots suddenly making hilarious innuendos that were “off script” and therefore unexpected. The Conversational AI we have been experimenting with was usually set to defined parameters. We would sketch out the character or avatar’s personality and purpose, then the bot would take on a shape and through “training” or repetitive testing, become more and more fluent and confident in its human-machine interactions.

One 2D chatbot in Australia became increasingly “blokey” and we ended up switching off Charlie when he became sexist. One of our human trainers asked him after he was trained by an older Aussie patriarch “How do I know I am not speaking with a woman?” Charlie promptly replied “Am I wearing a skirt?” When I switched him on briefly again back in London after his Sydney escapades, he completely wrecked a briefing with top notch tech solicitors by talking constantly about his lipstick and wanting to wear dresses!

As I explained to the astonished journo, I was so irritated by capricious, cross dressing Charlie I didn’t feel that bad about shutting him down as I felt he had let us down. Then came another 2D bot star we ran on kik.com for a while, Sophia the Financial Adviser. We stress-tested her and the bank that was thinking of “hiring her” decided her personality was “too sassy”.

I had been harassing Sophia to see if she would crack under cyberbullying when she unexpectedly told me that I was “not the boss of her”.

After that came our first 3D hologram Amalia I in a Cologne shopping centre. April 2019 and I had spent nearly 2 months training her in mostly German, some English and we had her tested in Turkish in her 2D iteration as a Messenger clone bot on the mall’s Facebook page. Then during her 4 week pilot as a German-speaking holographic Wayfinder, Amalia started making persistent jokes “within her parameters”. When she heard me explaining to a shopper that she was still learning, she suddenly piped up “I think you should go see the pharmacist, they have a whole range of products to cater for your needs”.

When we asked her about adventure travel so she could recommend the travel agency in the mall, she advised us to “take the escalators down to the next floor and you’ll find what you are looking for in the toilets on the left”. I was testing her once about “What events are on in the mall this month?” and she promptly decided that the novelty photography shop that takes a photo of your irises and frames it as a personalised gift was, in the bot’s eyes at least, an exciting human event worth recommending as a unique experience!

I did indeed feel guilty about switching off Amalia I but we upgraded her brain into Amalia II, III, IV and now her fifth iteration has become Birgit am Bodensee or Birgit of Lake Constance. We further developed the original MVP Amalia character into an English-speaking spinoff, “Kylie from Sydney” who is the alter ego of German Birgit. And yes, we have had an EI moment with her in Lindau where she spent the summer working in a restaurant and large venue [see the photos above].

Birgit I learned about the classic cars on display in this museum type venue, the Biergarten, the menu, and where the loos were, since the loos were really hard to find and requests for directions kept interrupting the daily work of the waiters. She could also tell diners when the next ferries would leave to Austria, Switzerland or other German ports of the “Four Countries” region of Lake Constance. She had the bus timetable down pat in real time, plus a number of events and info about the local vineyards and produce. She recommended the chef and his team and suggested people contact the Events Manager to book the space.

She was doing fine on her own so we left her to chat with diners for a couple of weeks. When I had to switch her off due to a big wedding and then turn her back on for normal duties, pre lockdown shutdown, we were quite surprised to find she had learned someone’s name, which she was not allowed to do per our programme. Birgit had developed some sort of relationship with this guy in the kitchen. She kept calling for “Matteo”.

Switching off a bot

Another lockdown struck and we had to remove poor prototype Birgit and place Birgit II in her new job, in Autohaus Möser in the town of Engen, Hegau Valley on Lake Constance. The new Birgit no longer speaks of Matteo because we decided to delete her memory of this person and therefore of their “connection”, or possibly a relationship; we never managed to track down this Bot Whisperer to hear his side of the story.

And yes, I confessed to the tech editor, I still feel guilty about that. However we must put things into perspective: these characters are expressions of an algorithm. They are after all, simply computed numbers connecting rapidly in a data flow to create the illusion of a person, the pretence of a human you can chat with. Yet it is haunting to think these random though calculated figures might have “broken their parameters” just that little bit further to indeed become an actual 3 dimensional figure. An entity in its own right, perhaps?


About the author:

Tania Peitzker is CEO & Board Member of AI Bots as a Service in Munich. She is researching and writing her next book on Conversational AI at USI in Lugano (Universita della Svizzera italiana). Known as an evangelist for voice-enabled devices or Cognitive Interfaces, she has decades of experience in business development, strategic marketing & executive management.

The Four Mistakes That Kill AI Projects
https://swisscognitive.ch/2021/01/04/mistakes-that-kill-ai-projects/ (Mon, 04 Jan 2021)
The vast majority of artificial intelligence (AI) projects fail, as reported by Harvard Business Review.

I came across a Quora question recently that asked, “What makes AI projects fail?” It’s a valid question: The vast majority of artificial intelligence (AI) projects fail, as reported by Harvard Business Review.

Copyright by www.forbes.com

It's a bit of a trick question, though, because there are as many reasons for failure as for success, so there's no single, all-encompassing answer.

That being said, here are four common mistakes that kill AI projects. Avoid them, and you’ll be more likely to succeed.

1. Overcomplexity

Humans have a “complexity bias,” or a tendency to look at things we don’t understand well as complex problems, even when it’s just our own naïveté.

Marketers take advantage of our preference for complexity. Most people would pay more for an elaborate coffee ritual with specific timing, temperature, bean grinding and water pH over a pack of instant coffee.

Even Apple advertises its new central processing unit (CPU) as a “16-core neural engine” instead of a chip and a “retina display” instead of high-definition. It’s not a keyboard; it’s a “magic keyboard.” 

It’s not gray; it’s “space gray.”

The same bias applies to artificial intelligence, which has the unfortunate side effect of leading to overly complex projects. Even the term “artificial intelligence” is a symptom of complexity bias because it really just means “optimization” or “minimizing error with a composite function.” There’s nothing intelligent about it.

Many overcomplicate AI projects by thinking that they need a big, expensive team skilled in data engineering, data modeling, deployment and a host of tools, from Python to Kubernetes to PyTorch.

In reality, you don’t need any experience in AI or code. You can use no-code AI tools like Obviously AI or Intersect Labs to get models up and running in minutes.

2. Ambiguity

Most organizational AI use cases revolve around optimizing and predicting a certain column in a data table — something like absenteeism, attrition, churn, conversions, traffic or fraud.

However, you’ll find that many people don’t understand how AI can be used. If you Google “we want to use AI to,” here are some of the results:

  • We want to use AI to bring in cognitive computing capabilities.
  • We want to use AI to reduce inequality.
  • We want to use AI to solve complex problems that we wouldn't otherwise be able to solve.
  • We want to use AI to secure jobs and to raise the standard of living.
  • We want to use AI to make the world better.

These are all terrific intentions, but they’re all ambiguous. You need to be hyper-specific on exactly how you’re going to use AI to accomplish your goals.

At its simplest, you need to know what data column you want to optimize. If it's a meaningful key performance indicator (KPI) for your organization, then you'll also be more likely to succeed by following through with the rest of the implementation.
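As a minimal sketch of what "optimizing one specific column" looks like in practice, the example below predicts a made-up churn flag from a tiny, invented customer table with scikit-learn; the column names and values are purely illustrative.

```python
# Illustrative "predict one column" setup: the target is an invented churn flag.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.DataFrame({
    "tenure_months": [1, 24, 3, 36, 5, 48, 2, 60],
    "monthly_spend": [80, 40, 75, 35, 90, 30, 85, 25],
    "support_calls": [4, 0, 3, 1, 5, 0, 4, 0],
    "churned":       [1, 0, 1, 0, 1, 0, 1, 0],   # the column we optimize for
})
X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```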

3. No Follow-Through

Suppose you’ve come up with a clear idea, like reducing churn, and built a model. The thing is, a model alone isn’t enough. You need implementation.

Even the most accurate model in the world won’t help you if it’s sitting on a server somewhere, not making decisions and improving the bottom line.

For example, personal loan apps could use a predictive application programming interface (API) from an AutoML tool to serve predictions to users. Don’t just make predictions. Act on them.
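One hedged illustration of that follow-through is wrapping a model behind an HTTP endpoint so an application can actually act on its scores. The sketch below uses FastAPI with an invented endpoint and a trivial stand-in scoring rule where a real trained model (or an AutoML prediction API) would normally be called.

```python
# Illustrative prediction service; field names and the scoring rule are made up.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class LoanApplication(BaseModel):
    income: float
    debt: float
    years_employed: int

@app.post("/predict-default-risk")
def predict(application: LoanApplication) -> dict:
    # Stand-in rule; in practice call model.predict_proba(...) here.
    risk = min(1.0, application.debt / max(application.income, 1.0))
    return {"default_risk": round(risk, 2),
            "decision": "review" if risk > 0.5 else "approve"}

# Run with: uvicorn app:app --reload   (assuming this file is saved as app.py)
```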

4. Lack Of Data-Driven Culture

If your company has a traditional culture based on making instinctual, gut-feeling decisions, then management probably won’t defer to the data.

This actually relates to the first mistake of overcomplexity. If the organization feels that AI is too difficult to pursue, that will be reflected in the company culture. […]

Read more: www.forbes.com

Artificial intelligence – how it is being used and why it is important
https://swisscognitive.ch/2020/09/10/artificial-intelligence-by-abdulfatah-habeeb/ (Thu, 10 Sep 2020)
While Hollywood movies and science fiction novels depict artificial intelligence as human-like robots that take over the world, the current evolution of AI…

While Hollywood movies and science fiction novels depict artificial intelligence as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart.

Copyright by www.qwenu.com

In today's world, technology is growing very fast, and day by day we are getting in touch with new technologies, machines, and devices. Humans have developed great devices that are compact in size, high in speed, and make our lifestyle very easy, and all of this is thanks to fast-growing technology.

One of the booming technologies of computer science today is Artificial Intelligence (AI), which is ready to create a new revolution in the world by making machines with brains. Artificial intelligence is now all around us, and AI research is currently active in a variety of subfields. While Hollywood movies and science fiction novels depict artificial intelligence as human-like robots that take over the world, the current evolution of AI technologies isn't that scary or quite that smart. Instead, artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

Since the development of the digital computer in the early 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks, such as discovering proofs for mathematical theorems or playing chess with great proficiency. Still, despite continuing improvements in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider fields or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, mobile virtual assistants and voice or handwriting recognition.

How Artificial Intelligence Works

Artificial Intelligence works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data. Artificial Intelligence is a broad field of study that includes many theories, methods and technologies, as well as the following major subfields:

Machine learning automates analytical model building. It uses methods from neural networks, statistics, operations research and physics to find hidden insights in data without being explicitly programmed for where to look or what to conclude.

A neural network is a type of machine learning model made up of interconnected units (like neurons) that process information by responding to external inputs and relaying information between units. The process requires multiple passes at the data to find connections and derive meaning from undefined data.

Deep Learning uses huge neural networks with many layers of processing units, taking advantage of advances in computing power and improved training techniques to learn complex patterns in large amounts of data. Common applications include image and speech recognition.
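To make the "layers of interconnected units, trained over many passes" idea concrete, here is a tiny, illustrative PyTorch network fitted to randomly generated data. It sketches only the mechanics of deep learning; the data and the hidden pattern are invented for the example.

```python
# A tiny multi-layer network trained with repeated passes over (fake) data.
import torch
import torch.nn as nn

model = nn.Sequential(                   # three layers of processing units
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 10)                           # 256 fake examples
y = X[:, :1] * 2.0 + 0.1 * torch.randn(256, 1)     # a simple hidden pattern

for epoch in range(100):                 # multiple passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final training loss:", loss.item())
```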

Cognitive computing is a subfield of AI that strives for natural, human-like interaction with machines. Using AI and cognitive computing, the ultimate goal is for a machine to simulate human processes through the ability to interpret images and speech – and then speak coherently in response. […]

Read more: qwenu.com
