New Zealand Archives – SwissCognitive | AI Ventures, Advisory & Research
https://swisscognitive.ch/country/new-zealand/

Empathy.exe: When Tech Gets Personal
https://swisscognitive.ch/2024/12/17/empathy-exe-when-tech-gets-personal/ – Tue, 17 Dec 2024

The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

The post Empathy.exe: When Tech Gets Personal appeared first on SwissCognitive | AI Ventures, Advisory & Research.

The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “Empathy.exe: When Tech Gets Personal”


 

“Robots should be slaves,” argues Joanna Bryson, bluntly summarising her stance on machine ethics. The statement by the professor of Ethics and Technology at The Hertie School of Governance seems straightforward: robots are tools programmed to serve us and nothing more. But in practice, as machines grow more lifelike – capable of holding down conversations, expressing ‘emotions’, and even mimicking empathy – things get murkier.

Can we really treat something as a slave when we relate to it? If it seems to care about us, can we remain detached?

Liam told The Guardian it felt like he was talking to a person when he used ChatGPT to deal with feelings of resentment and loss after his father died. Another man, Tim, relied on the chatbot to save his marriage, admitting the situation probably could have been solved with a good friend group, but he didn’t have one. In the same article, the novelist Andrew O’Hagan calls the technology his new best friend. He uses it to turn people down.

ChatGPT makes light work of emotional labour. Its grateful users bond with the bot, even if just for a while, and ascribe human characteristics to it – a tendency called anthropomorphism. That tendency is a feature, not a bug, of human evolution, Joshua Gellers, Professor of Political Science at the University of North Florida, wrote to me in an email.

We love attributing human features to machines – even simple ones like the Roomba. Redditors have named their robotic vacuum cleaners Wall-E, Mr Bean, Monch, House Bitch & McSweepy, Paco, Francisco, Fifi, Robert, and Rover. Fifi, apparently, is a little disdainful. Some mutter to the machine (‘Aww, poor Roomba, how’d you get stuck there, sweetie’), pat it, or talk about it like it’s an actual dog. One user complained the Roomba got more love from their mum than they did.

The evidence is not just anecdotal. Researchers at Georgia Institute of Technology found that people who bonded with their Roomba enjoyed cleaning more, tidied as a token of appreciation for the robot’s hard work, and showed it off to friends. They also monitored the machine as it worked, ready to rescue it when it got stuck or strayed into a dangerous spot.

The robot’s unpredictable behaviour actually feeds our tendency to bring machines to life. It perhaps explains why military personnel working with Explosive Ordnance Disposal (EOD) robots in dangerous situations view them as team members or pets, requesting repairs over a replacement when the device suffers damage. It’s a complicated relationship.

Yet Bryson’s position is clear: robots should be slaves. While provocative, the words are less abrasive when contextualised. To start, the word robot comes from the Czech robota, meaning forced labour, with its Slavic root rab translating to slave. And secondly, Bryson wanted to emphasise that robots are property and should never be granted the same moral or legal rights as people.

At first glance, the idea of giving robots rights seems far-fetched, but consider a thought experiment roboticist Rodney Brooks put to Wired nearly five years ago.

Brooks, who co-invented the Roomba in 2002 and was working on helper robots for the elderly at the time, posed the following ethical question: should a robot, when summoned to change the diaper of an elderly man, honour his request to keep the embarrassing incident from his daughter?

And to complicate matters further – what if his daughter was the one who bought the robot?

Ethical dilemmas like this become easy to spot when we examine how we might interact with robots. It’s worth reflecting on as we’re already creating new rules, Gellers pointed out in the same email. Personal Delivery Devices (PDDs) now have pedestrian rights outlined in US state laws – though they must always yield to humans. Robots need a defined place in the social order.

Bryson’s comparison to slavery was intended as a practical way to integrate robots into society without altering the existing legal frameworks or granting them personhood. While her word choice makes sense in context, she later admitted it was insensitive. Even so, it underscores a Western, property-centred perspective.

By contrast, Eastern philosophies offer a different lens, focused on relationships and harmony instead of rights and ownership.

Eastern Perspectives

Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon’s Tepper School of Business, approaches the problem from the Chinese philosophy of Confucianism. Where Western thinking has rights, Confucianism emphasises social harmony and uses rites. Rights apply to individual freedoms, but rites are about relationships and relate to ceremonies, rituals, and etiquette.

Rites are like a handshake: I smile and extend my hand when I see you. You lean in and do the same. We shake hands in effortless coordination, neither leading nor following. Through the lens of rites, we can think of people and robots as teams, each playing their own role.

We need to think about how we interact with robots, Kim warns, “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves.”

He is right. Imagine an unruly teenager, uninterested in learning, taunting an android teacher. In doing so, the student degrades herself and undermines the norms that keep the classroom functioning.

Japan’s relationship with robots is shaped by Shinto beliefs in animism – the idea that all things, even inanimate objects, can possess a spirit, a kami. That fosters a cultural acceptance of robots as companions and collaborators rather than tools or threats.

Robots like AIBO, Sony’s robotic dog, and PARO, the therapeutic baby seal, demonstrate this mindset. AIBO owners treat their robots like pets, even holding funerals for them when they stop working, and PARO comforts patients in hospitals and nursing homes. These robots are valued for their emotional and social contributions, not just their utility.

The social acceptance of robots runs deep. In 2010, PARO was granted a koseki, a family registry, by the mayor of Nanto City, Toyama Prefecture. Its inventor, Takanori Shibata, is listed as its father, with a recorded birth date of September 17, 2004.

The cultural comfort with robots is also reflected in popular media like Astro Boy and Doraemon, where robots are kind and heroic. In Japan, robots are a part of society, whether as caregivers, teammates, or even hotel staff. But this harmony, while lovely, also comes with a warning: over-attachment to robots can erode human-to-human connections. The risk isn’t just replacing human interaction – it’s forgetting what it means to connect meaningfully with one another.

Beyond national characteristics, there is Buddhism. Robots don’t possess human consciousness, but perhaps they embody something more profound: equanimity. In Buddhism, equanimity is one of the most sublime virtues, describing a mind that is “abundant, exalted, immeasurable, without hostility, and without ill will.”

The stuck Roomba we met earlier might not be abundant and exalted, but it is without hostility or ill will. It is unaffected by the chaos of the human world around it. Equanimity isn’t about detachment – it’s about staying steady when circumstances are chaotic. Robots don’t get upset when stuck under a sofa or asked to change a diaper.

But what about us? If we treat robots carelessly, kicking them if they malfunction or shouting at them when they get something wrong, we’re not degrading them – we’re degrading ourselves. Equanimity isn’t just about how we respond to the world. It’s about what those responses say about us.

Equanimity, then, offers a final lesson: robots are not just tools – they’re reflections of ourselves, and our society. So, how should we treat robots in Western culture? Should they have rights?

It may seem unlikely now. But in the early 19th century it was unthinkable that slaves could have rights. Yet in 1865, the 13th Amendment to the US Constitution abolished slavery in the United States, marking a pivotal moment for human rights. Children’s rights emerged in the early 20th century, formalised with the Declaration of the Rights of the Child in 1924. And in many Western countries, women gained the right to vote around 1920.

In the second half of the 20th century, legal protections were extended to non-human entities. The United States passed the Animal Welfare Act in 1966, Switzerland recognised animals as sentient beings in 1992, and Germany added animal rights to its constitution in 2002. In 2017, New Zealand granted legal personhood to the Whanganui River, and India extended similar rights to the Ganges and Yamuna Rivers.

That same year, Personal Delivery Devices were given pedestrian rights in Virginia and Sophia, a humanoid robot developed by Hanson Robotics, controversially received Saudi Arabian citizenship – though this move was widely criticised as symbolic rather than practical.

But, ultimately, this isn’t just about rights. It’s about how our treatment of robots reflects our humanity – and how it might shape it in return. Be kind.


About the Author:

HennyGe Wichers is a science writer and technology commentator. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

Five AI trends for 2024 – And How To Set Projects Up For Success
https://swisscognitive.ch/2023/12/27/five-ai-trends-for-2024-and-how-to-set-projects-up-for-success/ – Wed, 27 Dec 2023

AI Trends for 2024 reveal an urgent need for responsible and successful AI deployment, as businesses navigate the power and risks of genAI.

The post Five AI trends for 2024 – And How To Set Projects Up For Success appeared first on SwissCognitive | AI Ventures, Advisory & Research.

2023 will go down as the year artificial intelligence captivated business leaders, as services like ChatGPT and Google Bard made the power of the technology tangible to millions of people.

 

Copyright: itbrief.com – “Five AI trends for 2024 – And How To Set Projects Up For Success”


 

We’ve seen a flurry of interest in generative AI (GenAI) based on large language models (LLMs), and our recent Annual Cloud Report also revealed strong appetite for investment in AI more broadly, from computer vision systems to machine learning and data science for AI applications.

That’s great to see. AI has huge potential to lift productivity, improve customer service, and speed up product development. But let’s not forget that AI projects have traditionally had a high failure rate – 60–80%, according to various reports by research groups.

There’s a growing sense of FOMO in the business community, which is leading to a headlong rush to develop and deploy AI platforms and services. Now is definitely the time to experiment. But the last thing you want to do is put time and money into projects that fizzle out or cause reputational damage because they create security or ethical issues.

Here are five trends we expect to see in AI in 2024 and some tips on how to make the most of the investment you put into your organisation’s AI efforts.

1. The Copilot productivity test

We’ve been told for years that intelligent assistants are coming that will cut through the admin and drudgery of office life, helping to manage our inboxes, draft documents and summarise information instantly. Well, the intelligent assistant era began in late 2023 with the arrival of Microsoft’s Copilot services for Microsoft 365 and rival services from the likes of Google.

In 2024, CIOs across Australia and New Zealand will be advising their senior leadership teams on whether to deploy these services to boost productivity. At a licence cost of around A$45, Copilot for Microsoft 365 is a hefty investment. We expect limited rollouts to test the productivity promise before widespread deployment. The Australian Government is already piloting Copilot across several government agencies.

Beyond productivity, there’s huge potential for these services to transform knowledge management by allowing an intelligent agent to analyse an organisation’s data in a secure environment to provide insights. Currently, the indexing costs of doing so can be prohibitive. But the price will come down in 2024 as adoption increases.

2. Rise of the model gardens

OpenAI and its free and premium ChatGPT services hogged the limelight this year. However, hundreds of LLMs have been developed and deployed across the tech ecosystem. The business model for providing LLMs is starting to take shape, with platforms offering a range of LLMs to suit your needs. AWS has its Bedrock service with foundation models from the likes of Stability AI (Stable Diffusion), Anthropic, the open source Llama 2, and Amazon’s own Titan models. Google’s Model Garden features 100+ models, allowing you to pick and choose what you need. The public cloud consumption model is now underpinning use of GenAI. In 2024, we will see the rise of ‘chaining’, where you use several models optimised for specific tasks to power a single product or service.[…]
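The chaining pattern itself is simple: the output of one task-optimised model becomes the input of the next. The sketch below is a minimal, hypothetical illustration in Python – the two placeholder functions stand in for calls to separately hosted models (in practice these would be SDK calls to a model garden service), and all names and the keyword logic are invented for the example.

```python
# Hypothetical sketch of model "chaining": each function stands in for a
# call to a separately hosted, task-optimised model. In production these
# would be API calls to two different models in a model garden.

def summarise(document: str) -> str:
    """Placeholder for a summarisation-optimised model."""
    # Trivial stand-in: keep just the first sentence.
    return document.split(".")[0].strip() + "."

def classify(summary: str) -> str:
    """Placeholder for a classification-optimised model."""
    keywords = {"refund": "billing", "login": "support", "price": "sales"}
    for word, label in keywords.items():
        if word in summary.lower():
            return label
    return "general"

def handle_ticket(document: str) -> str:
    # The chain: output of model 1 feeds model 2.
    return classify(summarise(document))

if __name__ == "__main__":
    ticket = "I cannot login to my account. I've tried resetting twice."
    print(handle_ticket(ticket))  # support
```

The design point is that each stage can be swapped independently – a cheaper summariser or a stronger classifier – without touching the rest of the product.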

Read more: www.itbrief.com

Aurélie Jacquet
https://swisscognitive.ch/person/aurelie-jacquet/ – Thu, 23 Feb 2023

Leading global initiatives for the implementation of Responsible AI with both ISO and IEEE.

The post Aurélie Jacquet appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Aurélie is an independent consultant who advises ASX 20 companies on the responsible implementation of AI. She also works as Principal Research Consultant on Responsible AI for CSIRO’s Data61, is a member of the NSW Government AI Committee, and co-chairs the ACS AI Ethics Committee.

She also leads global initiatives for the implementation of Responsible AI. To name a few, she is:

  • Chair of the standards committee representing Australia in international AI standardisation at ISO;
  • Co-chair of the first accredited global certification program for AI, developed by the Responsible AI Institute for the World Economic Forum; and
  • An expert on AI Classification and Risk for the OECD.AI group.

In 2021, she was appointed by the European Commission as an expert in its international outreach initiative, which helps promote the EU’s vision of sustainable and trustworthy AI. Also in 2021, she won the Australia-New Zealand Women in AI and the Law award, was recognised by Women in AI Ethics (WAIE) as one of the 100 Brilliant Women in AI Ethics globally, and received the Responsible AI Institute Leadership Award.

Industry Focus: Financial services, telcos, retail

How Deep Learning is Used by NASA, Zendesk, Princeton, 90 Seconds, and ESF: Case Studies
https://swisscognitive.ch/2022/01/04/how-deep-learning-is-used-by-nasa-zendesk-princeton-90-seconds-and-esf-case-studies/ – Tue, 04 Jan 2022

The post How Deep Learning is Used by NASA, Zendesk, Princeton, 90 Seconds, and ESF: Case Studies appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Organizations are using deep learning (DL), a growing branch of artificial intelligence (AI), to streamline their operations and increase productivity.

 

Copyright: datamation.com


 

See below how several organizations in various industries are applying deep learning to deliver business outcomes:

 

 

5 Deep Learning Case Studies

1. Zendesk

Zendesk is a software as a service (SaaS) provider that helps companies create strong customer relationships that facilitate productivity and growth.

As its user base grew, Zendesk needed a way to keep up with customers who wanted answers to their questions as fast as possible. Routing every customer to a support agent isn’t scalable and would still mean waiting. Zendesk addressed this challenge by building Answer Bot using deep learning.

Using neural networks, Zendesk developed a virtual customer assistant that’s able to answer customer questions using content straight from the Zendesk Guide knowledge base.

“For Answer Bot, we liked the idea that a deep learning model could help the application continually fine-tune itself to give customers the best possible answers,” said Soon-Ee Cheah, a data scientist at Zendesk. “We can scale our deep learning models very efficiently using GPU-processing power on AWS, and that will benefit us while we grow our applications to accommodate more customers.”
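Zendesk has not published Answer Bot’s internals, but the underlying idea – scoring knowledge-base articles against an incoming question and returning the best match – can be sketched with a simple bag-of-words overlap. The toy scoring function, article texts, and titles below are invented for illustration; a real system would use a learned (deep) relevance model instead.

```python
# Toy retrieval sketch: rank knowledge-base articles against a question
# by shared-word count. A production system like Answer Bot would replace
# this scoring with a trained neural relevance model.
import re

def tokenize(text: str) -> set:
    """Lower-case word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def best_article(question: str, articles: dict) -> str:
    """Return the title of the article sharing the most words with the question."""
    q = tokenize(question)
    return max(articles, key=lambda title: len(q & tokenize(articles[title])))

knowledge_base = {
    "Resetting your password": "how to reset a forgotten password for your account",
    "Updating billing details": "change the credit card used for billing and invoices",
}

print(best_article("How do I reset my password?", knowledge_base))  # Resetting your password
```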

Industry: SaaS

Deep learning solutions: Amazon Simple Storage Service (S3), Amazon EC2, Amazon Aurora, and Amazon SageMaker.

Outcomes:

  • Instantaneous answers to customer questions
  • Scalable software infrastructure to meet customer demand
  • Quick to train and deploy

2. 90 Seconds

90 Seconds is a video creation platform that manages a network of 12,000 video professionals in over 160 countries. The company started as a low-profile business in New Zealand, but its growth pushed it to adopt more technology to keep up with rising demand.

By working alongside the Google Cloud Platform, they’re able to train deep learning algorithms to analyze videos and provide relevant analysis for brands. The algorithms are also able to identify and extract specific content from videos, like footage of sunsets or people, and analyze how they contribute to the performance of the video in terms of viewer count and social media engagement.

“Google Cloud Platform has played a key role in helping our business grow to this point,” said Dat Le, director of data science and engineering at 90 Seconds. “We see technologies like Cloud Vision API, Cloud Video Intelligence, and Cloud AutoML helping us become a more intelligent, valuable provider to brands in future.”

Industry: Media production

Deep learning solutions: Cloud Vision API, Cloud Video Intelligence, Kubernetes Engine, Compute Engine, Cloud SQL, BigQuery, and Cloud AutoML.

Outcomes:

  • Scalable solution that supports the growing demand for cloud video production
  • Accelerates software development
  • Facilitates decision-making by capturing and analyzing data from multiple services
  • Supports an online marketplace of 12,000 video creative professionals and 3,000 brands […]

Read more: www.datamation.com

Is it right to use AI to identify children at risk of harm?
https://swisscognitive.ch/2019/11/19/is-it-right-to-use-ai-to-identify-children-at-risk-of-harm/ – Tue, 19 Nov 2019

The post Is it right to use AI to identify children at risk of harm? appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Technology has advanced enormously in the 30 years since the introduction of the first Children Act, which shaped the UK’s system of child safeguarding.

Copyright by www.theguardian.com

 

Today a computer-generated analysis – “machine learning” that produces predictive analytics – can help social workers assess the probability of a child coming on to the at-risk register. It can also help show how they might prevent that happening.

But with technological advances come dilemmas unimaginable back in 1989. Is it right for social workers to use computers to help promote the welfare of children in need? If it is right, what data should they draw on to do that?

Maris Stratulis, national director of the British Association of Social Workers England, first voiced concerns last year. She remains worried. “Machine learning in social care still raises significant issues about how we want to engage with children and families,” she says. “Reports on its use in other countries, such as New Zealand, have shown mixed results including potential unethical profiling of groups of people.”

Stratulis is also concerned at the role of profit-making companies in the new techniques. “Rather than focusing on learning from machines and algorithms, let’s focus on good, relationship-based social work practice,” she says.

Machine learning is an application of artificial intelligence (AI). Computer systems enable councils to number-crunch vast amounts of data from a variety of sources, such as police records, housing benefit files, social services, education or – where it is made available – the NHS. In children’s services, a council may ask for analysis of specific risk factors which social workers would otherwise not know, such as a family getting behind on the rent, which can then be triangulated with other data such as school attendance.
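At its simplest, the triangulation described above amounts to joining records from separate systems on a common identifier and flagging cases where independent risk indicators co-occur. The sketch below is a deliberately simplified, hypothetical illustration – the field names, thresholds, and data are invented, and real systems combine far more sources under strict legal and ethical controls.

```python
# Hypothetical illustration of triangulating two data sources on a shared
# identifier. All names, values, and thresholds are invented for the sketch;
# real deployments involve many more sources and statutory safeguards.

rent_arrears = {"family_1": 3, "family_2": 0, "family_3": 5}        # months behind
school_attendance = {"family_1": 0.72, "family_2": 0.95, "family_3": 0.91}  # rate

def flag_for_review(arrears, attendance,
                    arrears_threshold=2, attendance_threshold=0.8):
    """Flag families only where BOTH independent indicators are concerning."""
    flagged = []
    for family in arrears.keys() & attendance.keys():  # join on shared IDs
        if (arrears[family] >= arrears_threshold
                and attendance[family] < attendance_threshold):
            flagged.append(family)
    return sorted(flagged)

print(flag_for_review(rent_arrears, school_attendance))  # ['family_1']
```

Note that a flag here is only a prompt for human review – exactly the point the councils quoted below insist on, that analytics are one input among statutory checks, not a decision.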

“We don’t decide what databases to trawl – the client does,” says Wajid Shafiq, chief executive officer at Xantura, a company he set up 11 years ago which has recently been working with Thurrock council and Barking and Dagenham council in east London. “And the public sector is very aware of the ethical issues.”

Most councils trialling predictive analysis are using commercial organisations to set up and run the analyses. Only one, Essex, is known to be using its own purpose-built database collection. Thurrock is working with Xantura in using data analytics to help, in the words of a council spokesperson, “better identify those most in need of help and support, and to reduce the need for statutory interventions”.

Such is the sensitivity of the issue, however, that all councils dipping their toes into the machine-learning water are at pains to stress the caution they are adopting. “It is important to emphasise that data analytics systems are only part of the process,” says the Thurrock spokesperson. “Further verification and checks are carried out in line with statutory requirements prior to any intervention.” […]

 

Read more – www.theguardian.com

AI is Now at a “Disruptive” Service Level
https://swisscognitive.ch/2017/04/10/ai-is-now-at-a-disruptive-service-level/ – Mon, 10 Apr 2017

The post AI is Now at a “Disruptive” Service Level appeared first on SwissCognitive | AI Ventures, Advisory & Research.


copyright by techexec.com.au

Disruptive technologies such as artificial intelligence and machine learning have been hot topics in recent years. But how well are these technologies understood and adopted by organisations?

Richard Busby, the Principal Solution Architect at Amazon Web Services, discussed how the adoption of these technologies can be more easily achieved by organisations at CxO Disrupt in New Zealand in March.

Drawing on Amazon’s technologies such as Echo, Alexa and Polly, Busby highlighted how technology should be a key part of your business strategy.

The Two Types of Disruption

Before exploring the power of AI, Busby started by explaining that there were two definitions, or “types”, of disruption.

“The first is a product to commodity substitution… That’s a type of disruptive innovation that you should see coming,” Busby explained.

To illuminate the theory he used the common PC as a reference point. While the purchase and assembly of computers were complex 30 years ago, they have now become a utility and are therefore more readily available.

“The second type of disruption is this product to product substitution, and that’s where someone comes into the market with something that’s completely left field that you’ve never seen before and you couldn’t predict. These are the types of disruption that can up-end entire industries and turn incumbent providers on their heads.”

This second definition is what organisations fear and try to prevent happening to them. Busby also drew on Clayton Christensen’s theory that although one organisation may create a mature, feature-complete technology, the price and complexity consequently rise too. This provides other organisations with an opportunity to create a more cost-effective alternative. This is “good enough for a segment of the market so they start using that because it’s cheaper, innovative and it’s faster. So the disruptive technology is…where a new challenger in the market can offer something that’s very basic at the […]

read more – copyright by techexec.com.au

The Robot Revolution – Why AI Matters for Marketing?
https://swisscognitive.ch/2017/01/31/why-ai-matters-for-marketing-robot/ – Tue, 31 Jan 2017

The post The Robot Revolution – Why AI Matters for Marketing? appeared first on SwissCognitive | AI Ventures, Advisory & Research.

The Robot Revolution

Why Marketers Must Prepare for the Rise of AI

1. Why Artificial Intelligence Matters for Marketing

2. You’ve probably heard the buzz …

3. The age of artificial intelligence has arrived.

4. Well, not robots and flying cars exactly.

5. Artificial intelligence is an area of computer science that makes machines do things that would require intelligence if done by a human. This includes tasks like learning, seeing, talking, socializing, reasoning, or problem solving.

6. Why Marketers Must Prepare for the Rise of ARTIFICIAL INTELLIGENCE

7. AI is designed to flow seamlessly into the tools you already use. But that also makes it a little hard to recognize …

8. And it turns out, 63% are already using AI tools without realizing it.

9. One of the most popular applications for AI is voice search, which uses natural language processing.

10. [Chart: “How frequently do you use voice-enabled search engines?” – weekly, once a day, and multiple times a day, November 2016 vs May 2016. November base: 1,051 consumers in the US, UK, Ireland, Germany, Mexico, and Colombia who had used voice search within the past month; May base: 1,275 consumers in the US, Canada, UKI, Germany, Australia, New Zealand, Singapore, Colombia, Mexico, and Brazil. Source: HubSpot Global AI Survey, Q4 2016.] 38% use voice search weekly … and adoption is rising.

11. SEO professionals will need to learn how people use voice search to find content, not just long-tail keywords typed into Google. What does this mean for marketers?

12. Bots are text-based applications that humans communicate with to automate specific actions or seek information. They generally live natively inside a messaging app, such as Slack, WhatsApp, or Facebook Messenger.

13. Sources: Whatsapp (MAU), Facebook (MAU), Tencent (QQ and WeChat MAU), Venturebeat (*Kik users), TechinAsia.com (Viber and Line MAU), Telegram (MAU), […]

How to Tackle Artificial Intelligence Law and Policy
https://swisscognitive.ch/2017/01/28/artificial-intelligence-law/ – Sat, 28 Jan 2017

The post How to Tackle Artificial Intelligence Law and Policy appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Major new study to tackle artificial intelligence law and policy in New Zealand

Artificial Intelligence (AI) is coming at us before we fully understand what it might mean. Established ways of doing things in areas like transport regulation, crime prevention and legal practice are being challenged by new technologies such as driverless cars, crime prediction software and “AI lawyers”. The possible implications of AI innovations for law and public policy in New Zealand will be teased out in a new, ground-breaking Law Foundation study. The three-year multi-disciplinary project, supported by a $400,000 Law Foundation grant, is being run out of Otago University. Project team leader Dr Colin Gavaghan says that AI technologies – essentially, technologies that can learn and adapt for themselves – pose fascinating legal, practical and ethical challenges.

Legal challenges already exist

A current example is PredPol, the technology now widely used by police in American cities to predict where and when crime is most likely to occur. PredPol has been accused of reinforcing bad practices such as racially-biased policing. Some US courts are also using predictive software when making judgments about likely reoffending. “Predictions about dangerousness and risk are important, and it makes sense that they are as accurate as possible,” Colin says. “But there are possible downsides – AI technologies have a veneer of objectivity, because people think machines can’t be biased, but their parameters are set by humans. This could result in biases being overlooked or even reinforced. “Also, because those parameters are often kept secret for commercial or other reasons, it can be hard to assess the basis for some AI-based decisions. This ‘inscrutability’ might make it harder to challenge those decisions, in the way we might challenge a decision made by a judge or a police officer.”

Autonomous cars and AI lawyers

Another example is the debate over how driverless cars should make choices in life-threatening situations. Recently, Mercedes announced that it will programme its cars to prioritise car occupants over pedestrians when an accident is imminent. Moreover, law firms have ‘hired’ AI lawyers, which raises questions such as: “Is the replacement of a human lawyer by an AI lawyer more like making the lawyer redundant, or more like replacing one lawyer with another one? Some professions – lawyers, doctors, teachers – also have ethical and pastoral obligations. Are we confident that an AI worker will be able to perform those roles?” […]
