Media & Marketing Archives - SwissCognitive | AI Ventures, Advisory & Research https://swisscognitive.ch/industry_sector/media-marketing/ SwissCognitive | AI Ventures, Advisory & Research, committed to Unleashing AI in Business Mon, 31 Mar 2025 08:30:46 +0000

Fortifying the Future: Ensuring Secure and Reliable AI https://swisscognitive.ch/2025/04/01/fortifying-the-future-ensuring-secure-and-reliable-ai/ Tue, 01 Apr 2025 03:44:00 +0000 Ensuring AI resilience and security is becoming essential as systems grow in influence and exposure to manipulation and attack.

The post Fortifying the Future: Ensuring Secure and Reliable AI appeared first on SwissCognitive | AI Ventures, Advisory & Research.

AI systems, while offering immense potential, are also vulnerable to attacks and data manipulation. From the digital to the physical, it is crucial to integrate security and reliability into the development and deployment of AI. From AI sovereignty to attack and failure training, AI of the future will become a matter of national security.

 

SwissCognitive Guest Blogger: Eleanor Wright, COO at TelWAI – “Fortifying the Future: Ensuring Secure and Reliable AI”


 

As AI becomes further integrated into various domains, from infrastructure to defence, ensuring its robustness will become a matter of national security. An AI system managing power grids, security apparatus, or financial networks could present a single point of failure if compromised or manipulated. Historical incidents, such as the Stuxnet cyberweapon, illustrate the physical and cyber damage that can be inflicted. When considering AI’s complexity, the potential for a cascade of both physical and digital harm increases dramatically.

As such, we should ask: How do we fortify AI?

AI systems must be designed to withstand attacks. From decentralisation to layering, these systems should be constructed so that control points can seamlessly enter and exit the loop without disabling the broader system, building redundancy and backup into various control points. For example, if a sensor or a group of sensors is deemed to have failed or been corrupted, the broader system must be capable of automatically readjusting so that it stops using data and intelligence gathered from those sensors.
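That readjustment logic can be sketched in a few lines of Python. This is a simplified illustration only; the sensor names, values, and averaging fusion rule are hypothetical stand-ins for a real redundancy scheme.

```python
# Minimal sketch: fuse readings from redundant sensors, automatically
# excluding any flagged as failed or corrupted, so the broader system
# keeps operating on the remaining healthy inputs.

def fuse_readings(readings, failed):
    """Average readings from healthy sensors only.

    readings: dict of sensor_id -> value
    failed:   set of sensor_ids deemed failed or corrupted
    """
    healthy = {sid: v for sid, v in readings.items() if sid not in failed}
    if not healthy:
        raise RuntimeError("no healthy sensors left - fail safe")
    return sum(healthy.values()) / len(healthy)

readings = {"s1": 20.1, "s2": 19.9, "s3": 85.0}   # s3 is corrupted
estimate = fuse_readings(readings, failed={"s3"})
# estimate is computed from s1 and s2 only; s3 no longer influences it
```

The key design point is that exclusion happens at the fusion step, so a control point can drop out of the loop without any other component changing.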

Another strategy for strengthening AI systems involves simulating data poisoning attacks and training AI systems to detect such threats. By teaching the systems to recognise and respond to attacks or failures, they can automatically reconfigure without the need for human intervention. If an AI can learn to identify tainted data, such as statistical anomalies or inconsistent patterns, it could flag or quarantine suspect inputs. This approach leans heavily on machine learning’s strengths: pattern recognition and adaptability. However, it’s not a failsafe; adversaries could evolve their attacks to more closely mimic legitimate data, so the training would need to be dynamic, constantly updating to match new threat profiles.
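A minimal, illustrative version of that flag-and-quarantine step can use a robust statistical anomaly test. The threshold and data below are hypothetical; a production system would use far richer features, but the shape of the idea is the same.

```python
import statistics

def quarantine_suspect(values, threshold=3.5):
    """Split inputs into (clean, suspect) using a robust z-score
    based on the median absolute deviation (MAD), which is less
    easily skewed by the poisoned points themselves than the mean."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return list(values), []
    clean, suspect = [], []
    for v in values:
        z = 0.6745 * abs(v - med) / mad
        (suspect if z > threshold else clean).append(v)
    return clean, suspect

clean, suspect = quarantine_suspect([10, 11, 10, 12, 200])
# the 200 reading is quarantined as a suspected poisoned input
```

As the paragraph notes, this is not a fail-safe: an adversary who mimics the legitimate distribution closely enough will pass such a test, so the detection model must be retrained as threats evolve.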

Maintaining a human in the loop to enable oversight and override is considered one of the most crucial elements in the rollout of AI across industries. Allowing humans to oversee AI decision-making and restricting autonomy can prevent potentially harmful actions by these systems. Whilst critical in the early stages of AI deployment, as capabilities scale and evolve there may come a point where human oversight inhibits these systems and itself causes more harm than good.

Finally, AI sovereignty may prove to be the most critical element in ensuring companies and governments fully control the essential algorithms and hardware powering their operations. Without this control, these systems could be vulnerable to foreign interference, including cyberattacks, espionage, or sabotage. As the use of AI increases, the sovereignty of AI systems and their components will become increasingly important. At its core, AI sovereignty is about control, whether exercised by governments, corporations, or individuals: those who control the data, infrastructure, and decision-making power behind AI systems and sensors ultimately control AI itself.

Fortification will involve integrating resilience, adaptability, and sovereignty into AI’s DNA, ensuring it is not only intelligent but also robust and dependable. AI can provide technological advantages, but it may also expose systems to disruption and the exploitation of vulnerabilities. As organisations race to harness AI’s potential, the question looms: Will AI enable organisations to gain a strategic advantage, or will it undermine the very systems it was designed to strengthen?


About the Author:

Holding a BA in Marketing and an MSc in Business Management, Eleanor Wright has over eleven years of experience working in the surveillance sector across multiple business roles.

A New Era of Intelligent Robots – AI and Robotics https://swisscognitive.ch/2025/03/11/a-new-era-of-intelligent-robots-ai-and-robotics/ Tue, 11 Mar 2025 04:44:00 +0000 https://swisscognitive.ch/?p=127317 AI and robotics are evolving, making machines more adaptive and efficient while raising new challenges for integration into society.

The post A New Era of Intelligent Robots – AI and Robotics appeared first on SwissCognitive | AI Ventures, Advisory & Research.

The fusion of AI and Robotics is poised to transform society, enabling tasks beyond humanity’s physical and cognitive limitations. From automation to national defence, the application of AI to robotics will allow machines to adapt to situations, autonomously perform complex tasks, and enable smarter environments, but it will also raise ethical and societal concerns.

 

SwissCognitive Guest Blogger: Eleanor Wright, COO at TelWAI – “A New Era of Intelligent Robots”


 

Imagine a world where humanoid robots cook for you, care for your loved ones, and streamline your workday – all powered by AI smarter than ever before. The global AI-in-robotics market, projected to surpass $124 billion by 2030, is set to make this vision a reality. As the capabilities of AI evolve, these machines will become our companions, caregivers, and coworkers; they’ll make mobility more affordable, transform access to services, and redefine the value of human effort.

From Amazon’s fleet of 750,000 warehouse robots to Tesla’s ambition to build 10,000 humanoid Optimus robots this year, the age of robots is upon us. Dependent on sensors and actuation systems to navigate and interact with the physical environment, this new generation of robotics hinges on advances in AI designed to mimic and learn from its biological makers. To equip these robots with intelligence, engineers across various domains of expertise use AI to enable vision, natural language processing, sound processing, pressure sensing, and more.

Beyond sensing, AI also enables robots to reason, adapt, and learn, using approaches including—but not limited to—reinforcement learning, neural networks, and Bayesian networks. These models and methods enable robots to assess risks and determine actions, and by learning from experience, robots can adapt to new tasks and environments. Thus, AI enables robots to perceive, act, learn, and adapt, allowing them to perform tasks with greater autonomy and precision.

However, integrating AI into robotics isn’t seamless; it comes with hurdles. Robots struggle with real-time processing delays, adapting to messy, unpredictable environments, squeezing efficiency from limited hardware, and understanding human quirks like vague commands or gestures. These challenges constrain capabilities and the pace at which robots enter and dominate markets.

So, how can these challenges be addressed?

Some developments in addressing these challenges include:

1. Parallel computing

Parallel computing involves dividing larger tasks into smaller, independent tasks that can be processed simultaneously rather than sequentially. This enables increased computational efficiency, reduced latency, and improved cost efficiency. In robotics, parallel computing allows robots to process inputs from LIDAR, radar, and cameras simultaneously, enabling them to navigate environments more effectively and efficiently.
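In Python, this kind of simultaneous sensor processing can be sketched with a thread pool. This is an illustrative toy: the per-sensor functions are hypothetical stand-ins for real pipelines that would each take meaningful time.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-sensor processing stages; each one is an independent
# task that can run at the same time as the others.
def process_lidar(frame):  return ("lidar", len(frame))
def process_radar(frame):  return ("radar", len(frame))
def process_camera(frame): return ("camera", len(frame))

def process_frame_parallel(lidar, radar, camera):
    """Run the three sensor pipelines concurrently instead of sequentially."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [
            pool.submit(process_lidar, lidar),
            pool.submit(process_radar, radar),
            pool.submit(process_camera, camera),
        ]
        return dict(f.result() for f in futures)

result = process_frame_parallel([0] * 100, [0] * 50, [0] * 1000)
# result maps each sensor to its processed output
```

The latency win comes from the fact that the slowest pipeline, rather than the sum of all three, now bounds each frame's processing time.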

2. Transfer learning

Transfer learning leverages pre-trained models to solve new, but similar, problems. In this approach, a model trained on one task or dataset is reused and fine-tuned for a related task. For example, in machine vision for defect detection in manufacturing, fine-tuning a pre-trained model on a smaller dataset of images allows it to quickly adapt to detect specific defects, such as cracks or dents, without needing to train a model from scratch.
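A toy, stdlib-only sketch of the idea: a "pretrained" feature extractor is kept frozen, and only a small new head is fitted to the new task. All names, data, and parameters here are hypothetical; real transfer learning would fine-tune a deep network, but the freeze-and-refit structure is the same.

```python
def pretrained_features(x):
    """Stand-in for a frozen feature extractor learned on a prior task."""
    return [x, x * x]

def fine_tune_head(data, lr=0.01, epochs=2000):
    """Fit only the new head's weights on a small task-specific dataset."""
    w = [0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            feats = pretrained_features(x)
            err = sum(wi * fi for wi, fi in zip(w, feats)) - y
            w = [wi - lr * err * fi for wi, fi in zip(w, feats)]
    return w

# New task: y = 2x + x^2. Only three samples are needed because the
# features are reused rather than learned from scratch.
head = fine_tune_head([(1, 3), (2, 8), (3, 15)])
```

This is why fine-tuning on a small defect-image dataset works: the expensive representation is inherited, and only the task-specific mapping is learned.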

3. Self-calibrating AI

Self-calibrating AI refers to systems that autonomously adjust their parameters, models, or processes to maintain optimal performance without manual intervention. In robotics, self-calibrating AI enables robots to adapt to changes in their environment, hardware, or tasks, ensuring they operate with optimized accuracy and efficiency over time.
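A minimal sketch of the idea, under a hypothetical setup: a robot's range sensor drifts over time, and the system corrects its own offset by periodically checking against a known reference target, with no manual intervention.

```python
class SelfCalibratingSensor:
    def __init__(self, alpha=0.2):
        self.offset = 0.0   # learned correction for drift
        self.alpha = alpha  # adaptation rate

    def read(self, raw):
        return raw - self.offset

    def recalibrate(self, raw, reference):
        """Nudge the offset toward the bias observed at a reference target."""
        error = (raw - self.offset) - reference
        self.offset += self.alpha * error

sensor = SelfCalibratingSensor()
for _ in range(50):                 # periodic checks against a 1.0 m target
    sensor.recalibrate(raw=1.3, reference=1.0)
# sensor.read(1.3) now returns a value close to the true 1.0 m
```

The same pattern scales up: replace the scalar offset with model parameters and the reference target with any self-supervised consistency signal.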

4. Federated learning

Federated learning is a technique that enables AI systems to learn from distributed data sources whilst ensuring privacy and security. It allows AI to collaboratively train a shared model without transferring sensitive data, preserving privacy and reducing reliance on centralised storage. For example, delivery robots use federated learning to optimise pathfinding without sending raw data, such as sensor inputs or location, to a central server. Instead, they locally update their models and share improvements, preserving both privacy and security.
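The delivery-robot example can be sketched with federated averaging (FedAvg): each client improves a shared model locally, and only the model weights, never the raw data, are aggregated. Names, data, and the tiny linear model below are illustrative.

```python
def local_update(weights, local_data, lr=0.1):
    """One round of local training (here: gradient steps on a linear model)."""
    w = weights[:]
    for x, y in local_data:
        err = w[0] * x + w[1] - y
        w[0] -= lr * err * x
        w[1] -= lr * err
    return w

def federated_average(client_models):
    """Aggregate by averaging weights; raw data never leaves a client."""
    n = len(client_models)
    return [sum(m[i] for m in client_models) / n
            for i in range(len(client_models[0]))]

global_model = [0.0, 0.0]
clients = [[(1, 2), (2, 4)], [(3, 6), (4, 8)]]  # each robot's private data
for _ in range(300):  # communication rounds
    updates = [local_update(global_model, data) for data in clients]
    global_model = federated_average(updates)
# global_model approaches y = 2x without the raw data ever being pooled
```

Note what crosses the network: two weight values per client per round, regardless of how much sensor data each robot holds locally.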

These developments indicate a key focus on efficiency, adaptability, and learning – all of which are essential for the continued evolution of robotics in complex, real-world environments. Additionally, these advancements contribute to a future where robots collaborate with humans, leveraging their ability to learn from experience and improve over time.

So, what’s next for AI in Robotics?

Just as AI agents are taking over the digital realm, they are about to flood robotics too. AI agents embedded in robotics will supercharge the autonomy and flexibility of robots, enabling them to communicate with humans and even interpret intentions by analysing gestures and potentially emotional cues. Crucial to human-robot interactions, AI agents may prove highly effective in assisted care, hospitality, and other service industries.

Additionally, as technologies like federated learning and edge computing evolve, robots will share knowledge without compromising privacy or relying on centralised data. This will improve scalability and efficiency by reducing the need for costly centralised storage and processing, and enable additional robots to integrate rapidly into existing networks.

So, where does this leave us?

Although there are abundant market opportunities for AI in robotics, the pace at which different markets adopt robotics will vary, with AI a key factor driving that adoption. Crucial for overcoming challenges related to autonomy, adaptability, and decision-making, AI will empower robots to perform tasks once considered too complex or risky for automation. As AI continues to evolve, it will not only raise important concerns about safety, ethics, and integration but also help address them, ensuring robots can work seamlessly alongside humans and contribute to a more productive future.


About the Author:

Holding a BA in Marketing and an MSc in Business Management, Eleanor Wright has over eleven years of experience working in the surveillance sector across multiple business roles.

$100B for AI Chips, $40B for AI Bets – SwissCognitive AI Investment Radar https://swisscognitive.ch/2025/03/06/100b_for_ai_chips_40b_for_ai_bets-swisscognitive-ai-investment-radar/ Thu, 06 Mar 2025 04:44:00 +0000 https://swisscognitive.ch/?p=127299 AI bets are reshaping industries, with billions going into AI chips and AI investments across finance, media, and cloud technology.

The post $100B for AI Chips, $40B for AI Bets – SwissCognitive AI Investment Radar appeared first on SwissCognitive | AI Ventures, Advisory & Research.

Massive AI bets are reshaping industries, with $100 billion going into AI chips and $40 billion fueling AI investments across finance, media, and cloud technology.

 

$100B for AI Chips, $40B for AI Bets – SwissCognitive AI Investment Radar


 


AI investment shows no signs of slowing, with capital flowing across semiconductors, cloud AI, financial AI, and responsible AI initiatives. This week, TSMC is preparing a staggering $100 billion investment in U.S. chip production, reinforcing the U.S. AI supply chain. Meanwhile, Anthropic’s valuation tripled to $61.5 billion, after securing $3.5 billion in funding to keep pace with OpenAI and DeepSeek.

The private sector’s AI appetite remains insatiable. Blackstone’s Jonathan Gray emphasized AI’s dominance in global investment trends, while Guggenheim and billionaire investors assembled a $40 billion AI investment pool to fuel finance, sports, and media innovation. Meanwhile, Canva’s AI report revealed that 94% of marketers have now integrated AI into their operations, marking a fundamental shift in business strategy.

The global AI race is also drawing government interest. The European Commission announced a €200 billion mobilization for AI investments, alongside France’s €109 billion push, as President Macron aims to position Europe as a heavyweight in AI development. Across the globe, China’s Honor pledged $10 billion to AI investment, deepening ties with Google for a global expansion.

The infrastructure for AI applications continues to scale rapidly. DoiT announced a $250 million fund dedicated to AI-driven cloud operations, while Shinhan Securities backed Lambda Labs with a $9.3 million investment to advance NVIDIA GPU-powered AI cloud services. Meanwhile, Accenture is doubling down on AI decision intelligence, backing Aaru to improve AI-powered behavioral simulations.

Beyond the corporate sphere, responsible AI investments are gaining traction. Chinese firms are increasing spending on ethical AI as part of a broader strategy to align AI governance with innovation. Meanwhile, Blackstone committed $300 million to AI-driven Insurtech, supporting AI-powered safety solutions in insurance.

With tech giants, startups, and governments all placing massive bets on AI, the sector’s financial landscape is evolving faster than ever. Investors are watching closely as AI’s long-term ROI takes center stage.

How will the capital influx shape AI’s next phase? The coming months will bring more answers.

Previous SwissCognitive AI Radar: AI Expansion and This Week’s Top Investments.

Our article does not offer financial advice and should not be considered a recommendation to engage in any securities or products. Investments carry the risk of decreasing in value, and investors may potentially lose a portion or all of their investment. Past performance should not be relied upon as an indicator of future results.

Navigating the Adoption of AI by the Public Sector https://swisscognitive.ch/2025/02/18/navigating-the-adoption-of-ai-by-the-public-sector/ Tue, 18 Feb 2025 04:44:00 +0000 https://swisscognitive.ch/?p=127213 Artificial Intelligence (AI), its impact in public sector, and the business models underpinning its procurement.

The post Navigating the Adoption of AI by the Public Sector appeared first on SwissCognitive | AI Ventures, Advisory & Research.

AI, its impact on public services, and the business models underpinning its procurement.

 

SwissCognitive Guest Blogger: Eleanor Wright – “Navigating the Adoption of AI by the Public Sector”


 

AI is perfectly positioned to transform government efficiency and public services, and governments globally are investing heavily in it. From the UK’s plan to ramp up AI adoption to Emirati investment in project Stargate, no government wants to be left behind.

AI, however, has more to offer governments than transforming public services, and government contracts will accelerate AI companies to industry dominance.

The public sector adoption of AI will require infrastructure, expertise, and a risk appetite. Data centers will be built, and vast amounts of energy will be used. Beyond the financial and material investment, engineers will be needed to code and develop these systems, and government expertise will be required to procure and integrate AI into antiquated legacy systems.

AI, however, has more to offer governments than transforming public services, and governments have the power to transform the business of AI. By gatekeeping access to data and awarding long-term contracts, governments can rapidly accelerate AI companies into big businesses, delivering the capital needed to beat out the competition and enabling a new wave of incumbents.

This model of public sector procurement from the private sector, however, may not be in the best interest of the citizens and taxpayers who will ultimately fund these large contracts. The more AI efficiency and capabilities develop and public sector jobs are replaced, the greater the dependency on these companies to maintain critical public services. Thus, it is fair to assume that a critical point will be reached where these companies become too big to fail. If public services become reliant on the capabilities and services of a handful of providers, the balance of power will shift.

This dependency, however, should not discourage the adoption of AI by the public sector, but should shape how contracts are procured and the business model underpinning them. Whether through public-private partnerships, state ownership, or a cooperative structure, the business models underlying the roll-out of AI into the public sector could determine how AI is procured and implemented.

Whilst state-owned assets or companies can be inefficient, open to political interference, and lacking a drive for innovation, they operate in the public interest. Capital saved can be reinvested into public services, and jobs that would otherwise be outsourced to the private sector can be created internally.

In the same way that state-owned companies operate in the interest of the public, public-private partnerships and cooperative companies may represent a strong middle ground between purely public and purely private contracts. Public-private partnerships would limit the amount of control private companies exert, and cooperative companies could enable the development and procurement of AI systems that meet a common economic and social goal.

It should be noted, however, that neither public-private partnerships nor cooperatives are fully resilient against political or private interference. Decision-makers will always be susceptible to the desire for increased control and financial gain.

Finally, another alternative may be to implement an open-source procurement model. By procuring solely from companies using open-source base models, public service contracts built on such models could help mitigate incumbency dominance and level the playing field. These base models could even draw on university knowledge and expertise to drive and maintain innovation.

No matter how public service agencies and providers choose to procure and maintain AI contracts, the business model underpinning the procurement both internally and externally will heavily shape the future of AI. A carefully thought-out business model could provide a strategic advantage and deliver greater value to stakeholders.


About the Author:

Holding a BA in Marketing and an MSc in Business Management, Eleanor Wright has over eleven years of experience working in the surveillance sector across multiple business roles.

AI for Transformative Enterprise Growth: Insights from a Principal Engineer https://swisscognitive.ch/2025/02/11/ai-for-transformative-enterprise-growth-insights-from-a-principal-engineer/ Tue, 11 Feb 2025 09:27:52 +0000 https://swisscognitive.ch/?p=127207 AI is driving enterprise growth by enabling smarter decision-making, optimizing operations, and transforming customer engagement.

The post AI for Transformative Enterprise Growth: Insights from a Principal Engineer appeared first on SwissCognitive | AI Ventures, Advisory & Research.

AI is driving enterprise growth by enabling smarter decision-making, optimizing operations, and transforming customer engagement.

 

SwissCognitive Guest Blogger: Dileep Kumar Pandiya – “AI for Transformative Enterprise Growth: Insights from a Principal Engineer”


 

You know, it’s amazing to think about. Imagine your sales team closing deals twice as fast. Or your supply chain just adapting on the spot when the market shifts. Honestly, it’s not something from the future—it’s happening now, all thanks to AI.

I have been working in tech for almost 18 years, and I’ve seen how these tools turn ambitious ideas into actual results. I want to show you what that looks like in real life—where AI didn’t just help businesses grow, it completely changed the game.

How AI Unlocks Growth in Enterprises

What if your business could predict customer needs before they even knew them? AI makes this possible. It’s no longer about guesswork or reacting late; it’s about proactive strategies powered by data.
Take a retail chain struggling with overstock issues. By implementing AI to forecast demand using real-time trends, they reduced inventory waste by 20% and increased availability of high-demand items by 15%. It’s a transformation that goes beyond efficiency—it’s about building smarter, more agile businesses.
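For illustration only — the article does not describe the retailer's actual model — a toy demand forecaster using exponential smoothing shows the basic idea of weighting recent, real-time trends more heavily than older history. The data and smoothing factor are hypothetical.

```python
def forecast_next(demand_history, alpha=0.5):
    """Forecast next-period demand by exponentially weighting recent
    observations more heavily than older ones (higher alpha = more
    responsive to the latest trend)."""
    level = demand_history[0]
    for d in demand_history[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

weekly_units = [100, 120, 90, 130, 140]   # hypothetical weekly sales
next_week = forecast_next(weekly_units)
# the forecast leans toward the recent upward trend
```

A production demand model would add seasonality, promotions, and external signals, but the proactive stance is the same: order against the forecast, not against last month's average.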

AI Copilot: Redefining Sales with AI

Sales has always been about timing and relationships. But what if AI could help you focus on the right opportunities at exactly the right moment? That’s the promise of AI Copilot.
When we launched Copilot, the goal was simple: empower sales teams to act smarter and faster. By integrating AI, I built a platform that could analyze millions of data points in seconds to identify high-potential accounts. The result: sales teams were no longer overwhelmed by data; they were driven by insights.
Here’s what stood out most to me: within three months, Copilot wasn’t just saving time—it was generating millions in additional revenue. Seeing the tangible impact on businesses and hearing feedback like “I can’t imagine working without this” made every late night worth it.

Scaling Smarter with AI and Microservices

Think of a system that can process thousands of real-time events every second, with no downtime. That’s what we built with the Phoenix Project, a scalable platform that uses AI and microservices to empower B2B clients.
One client used this platform to optimize marketing campaigns dynamically. Instead of waiting weeks for data analysis, they could adjust strategies on the fly, improving lead quality by 30% and cutting acquisition costs dramatically. It’s proof that scalability isn’t just a technical goal—it’s a business imperative.

Lessons for Enterprises Ready to Embrace AI

Here’s a story I often share: A small business hesitant to invest in AI started with a single pilot project—automating customer inquiries with AI chatbots. Within six months, they expanded the system to handle order tracking, inventory checks, and even personalized product recommendations. Today, they credit AI for a 25% increase in customer retention.
My takeaway is to start small, but think big. AI’s value compounds over time, so even small steps can lead to significant transformations.

Future Trends in AI and Enterprise Growth

The future isn’t just about doing things faster—it’s about doing them smarter. Imagine systems that can explain their decisions clearly or tools that work alongside humans to tackle complex problems.
One trend I’m particularly excited about is real-time decision-making. For example, picture a global logistics company rerouting shipments during a storm, avoiding delays and cutting costs. This kind of agility is becoming the new standard, and businesses that embrace it early will set themselves apart.

Final Thoughts

AI is the foundation for building the future of business. Whether it’s transforming sales strategies, driving efficiency, or enabling agility, the opportunities are immense. My advice: Don’t wait for the perfect moment to start. Take a step, learn, and grow with AI.


About the Author:

Dileep Kumar Pandiya is a globally recognized Principal Engineer with over 18 years of groundbreaking work in AI and enterprise technology. He has pioneered transformative AI-driven platforms and scalable systems, driving innovation for Fortune 500 companies like ZoomInfo, Walmart, and IBM. His leadership has redefined sales technology and digital transformation, earning him prestigious awards and international acclaim for his contributions to business growth and industry advancement. Known for his ability to blend visionary thinking with practical solutions, Dileep continues to shape the future of enterprise technology.

What Happens When AI Commodifies Emotions? https://swisscognitive.ch/2025/01/14/what-happens-when-ai-commodifies-emotions/ Tue, 14 Jan 2025 04:44:00 +0000 https://swisscognitive.ch/?p=127041 The latest AI developments might turn empathy into just another product for sale, raising questions about ethics and regulation.

The post What Happens When AI Commodifies Emotions? appeared first on SwissCognitive | AI Ventures, Advisory & Research.

The latest AI developments turn empathy into just another product for sale, raising questions about ethics and regulation.

 

SwissCognitive Guest Blogger:  HennyGe Wichers, PhD – “What Happens When AI Commodifies Emotions?”


 

Imagine your customer service chatbot isn’t just solving your problem – it’s listening, empathising, and sounding eerily human. It feels like it cares. But behind the friendly tone and comforting words, that ‘care’ is just a product, fine-tuned to steer your emotions and shape your decisions. Welcome to the unsettling reality of empathetic AI, where emotions are mimicked – and monetised.

In 2024, empathetic AI took a leap forward. Hume.AI gave large language models voices that sound convincingly expressive and a perceptive ear to match. Microsoft’s Copilot got a human voice and an emotionally supportive attitude, while platforms like Character.ai and Psychologist sprouted bots that mimic therapy sessions. These developments are paving the way for a new industry: Empathy-as-a-Service, where emotional connection isn’t just simulated, it’s a product: packaged, scaled, and sold.

This is not just about convenience – but about influence. Empathy-as-a-Service (EaaS), an entirely hypothetical but now plausible product, could blur the line between genuine connection and algorithmic mimicry, creating systems where simulated care subtly nudges consumer behaviour. The stakes? A future where businesses profit from your emotions under the guise of customer experience. And for consumers on the receiving end, that raises some deeply unsettling questions.

A Hypothetical But Troubling Scenario

Take an imaginary customer service bot. One that helps you find your perfect style and fit – and also tracks your moods and emotional triggers. Each conversation teaches it a little more about how to nudge your behaviour, guiding your decisions while sounding empathetic. What feels like exceptional service is, in reality, a calculated strategy to lock in your loyalty by exploiting your emotional patterns.

Traditional loyalty programs, like the supermarket club card or rewards card, pale in comparison. By analysing preferences, moods, and triggers, empathetic AI digs into the most personal corners of human behaviour. For businesses, it’s a goldmine; for consumers, it’s a minefield. And it raises a new set of ethical questions about manipulation, regulation, and consent.

The Legal Loopholes

Under the General Data Protection Regulation (GDPR), consumer preferences are classified as personal data, not sensitive data. That distinction matters. While GDPR requires businesses to handle personal data transparently and lawfully, it doesn’t extend the stricter protections reserved for health, religious beliefs, or other special categories of information. This leaves businesses free to mine consumer preferences in ways that feel strikingly personal – and surprisingly unregulated.

The EU AI Act, introduced in mid-2024, goes one step further, requiring companies to disclose when users are interacting with AI. But disclosure is just the beginning. The AI Act doesn’t touch using behavioural data or mimicking emotional connection. Joanna Bryson, Professor of Ethics & Technology at the Hertie School, noted in a recent exchange: “It’s actually the law in the EU under the AI Act that people understand when they are interacting with AI. I hope that might extend to mandating reduced anthropomorphism, but it would take some time and court cases.”

Anthropomorphism, the tendency to project human qualities onto non-humans, is ingrained in human nature. Simply stating that you’re interacting with an AI doesn’t stop it. The problem is that it can lull users into a false sense of trust, making them more vulnerable to manipulation.

Empathy-as-a-Service could transform customer experiences, making interactions smoother, more engaging, and hyper-personalised. But there’s a cost. Social media already showed us what happens when human interaction becomes a commodity – and empathetic AI could take that even further. This technology could go beyond monetising attention to monetising emotions in deeply personal and private ways.

A Question of Values

As empathetic AI becomes mainstream, we have to ask: are we ready for a world where emotions are just another digital service – scaled, rented, and monetised? Regulation like the EU AI Act is a step in the right direction, but it will need to evolve fast to keep pace with the sophistication of these systems and the societal boundaries they’re starting to push.

The future of empathetic AI isn’t just a question of technological progress – it’s a question of values. What kind of society do we want to build? As we stand on the edge of this new frontier, the decisions we make today will define how empathy is shaped, and sold, in the age of AI.


About the Author:

HennyGe Wichers is a technology science writer and reporter. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

The post What Happens When AI Commodifies Emotions? appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
127041
How Countries Are Using AI to Predict Crime https://swisscognitive.ch/2024/12/23/how-countries-are-using-ai-to-predict-crime/ Mon, 23 Dec 2024 10:53:39 +0000 https://swisscognitive.ch/?p=126927 To predict future crimes seems like something from a sci-fi novel — but already, countries are using AI to forecast misconduct.

The post How Countries Are Using AI to Predict Crime appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
Countries aren’t only using AI to organize quick responses to crime — they’re also using it to predict crime. The United States and South Africa have AI crime prediction tools in development, while Japan, Argentina, and South Korea have already introduced this technology into their policing. Here’s what it looks like.

 

SwissCognitive Guest Blogger: Zachary Amos – “How Countries Are Using AI to Predict Crime”


 

A world where police departments can predict when, where and how crimes will occur seems like something from a science fiction novel. Thanks to artificial intelligence, it has become a reality. Already, countries are using this technology to forecast misconduct.

How Do AI-Powered Crime Prediction Systems Work?

Unlike regular prediction systems — which typically use hot spots to determine where and when future misconduct will be committed — AI can analyze information in real time. It may even be able to complete supplementary tasks like summarizing a 911 call, assigning a severity level to a crime in progress or using surveillance systems to predict where wanted criminals will appear.

A machine learning model evolves as it processes new information. Initially, it might train to find hidden patterns in arrest records, police reports, criminal complaints or 911 calls. It may analyze the perpetrator’s demographic data or factor in the weather. The goal is to identify any common variable that humans are overlooking.

Whether the algorithm monitors surveillance camera footage or pores over arrest records, it compares historical and current data to make forecasts. For example, it may consider a person suspicious if they cover their face and wear baggy clothes on a warm night in a dark neighborhood because previous arrests match that profile.
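The core idea of comparing historical records against a current situation can be illustrated with a deliberately simplified sketch. This is not any agency's actual system: the risk scores, cells, and threshold below are hypothetical, and real models use far richer features than a location-and-hour count.

```python
from collections import Counter

def build_risk_scores(incidents):
    """Count historical incidents per (location, hour) cell.

    `incidents` is a list of (location, hour) tuples from past records.
    Returns a dict mapping each cell to its share of all incidents.
    """
    counts = Counter(incidents)
    total = sum(counts.values())
    return {cell: n / total for cell, n in counts.items()}

def forecast(risk_scores, location, hour, threshold=0.2):
    """Flag a cell as high risk when its historical share exceeds the threshold."""
    return risk_scores.get((location, hour), 0.0) >= threshold

# Hypothetical historical records: (neighbourhood, hour of day)
history = [
    ("riverside", 23), ("riverside", 23), ("riverside", 22),
    ("old_town", 14), ("riverside", 23), ("old_town", 9),
]
scores = build_risk_scores(history)
print(forecast(scores, "riverside", 23))  # frequent cell -> True
print(forecast(scores, "old_town", 9))    # rare cell -> False
```

A real system replaces this frequency table with a trained model, but the forecasting step is the same in spirit: score the present against patterns extracted from the past.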

Countries Are Developing AI Tools to Predict Crime

While these countries don’t currently have official AI prediction tools, various research groups and private police forces are developing solutions.

  • United States

Violent and property crimes are huge issues in the United States. For reference, a burglary occurs every 13 seconds — almost five times per minute — causing an average of $2,200 in losses. Various state and local governments are experimenting with AI to minimize events like these.

One such machine learning model developed by data scientists from the University of Chicago uses publicly available information to produce output. It can forecast crime with approximately 90% accuracy up to one week in advance.

While the data came from eight major U.S. cities, it centered around Chicago. Unlike similar tools, this AI model didn’t depict misdemeanors and felonies as hot spots on a flat map. Instead, it considered cities’ complex layouts and social environments, including bus lines, street lights and walkways. It found hidden patterns using these previously overlooked factors.

  • South Africa

Human trafficking is a massive problem in South Africa. For a time, one anti-human trafficking non-governmental organization was operating at one of the country's busiest airports. After the group uncovered widespread corruption, its security clearance was revoked.

At this point, the group needed to lower its costs from $300 per intercept to $50 to align with funding and continue its efforts. Its members believed adopting AI would allow them to do that. With the right data, they could save more victims while keeping costs down.

Some Are Already Using AI Tools to Predict Crime

Governments have much more power, funding and data than nongovernmental organizations or research groups, so their solutions are more comprehensive.

  • Japan

Japan has an AI-powered app called Crime Nabi. The tool — created by the startup Singular Perturbations Inc. — is at least 50% more effective than conventional methods. Local governments will use it for preventive patrols.

Once a police officer enters their destination in the app, it provides an efficient route that takes them through high-crime areas nearby. The system can update if they get directed elsewhere by emergency dispatch. By increasing their presence in dangerous neighborhoods, police officers actively discourage wrongdoing. Each patrol’s data is saved to improve future predictions.

Despite using massive amounts of demographic, location, weather and arrest data — which would normally be expensive and incredibly time-consuming to process — Crime Nabi runs faster than conventional systems and at a lower cost.
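The patrol-routing idea described above, reaching a destination while passing through nearby high-crime areas, can be sketched as a shortest-path search in which streets through risky areas are made artificially "cheaper". This is purely illustrative: Crime Nabi's actual algorithm is not public, and the street grid, crime scores, and discount factor here are invented.

```python
import heapq

def patrol_route(graph, crime_score, start, goal, alpha=0.5):
    """Dijkstra over edge costs discounted by a crime score, so the
    cheapest path to the destination is pulled toward high-crime streets.

    graph: {node: [(neighbour, distance), ...]}
    crime_score: {node: value in [0, 1]}, higher means more incidents
    alpha: how strongly crime reduces a street's effective cost
    """
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, dist in graph.get(node, []):
            if nxt not in seen:
                effective = dist * (1.0 - alpha * crime_score.get(nxt, 0.0))
                heapq.heappush(queue, (cost + effective, nxt, path + [nxt]))
    return None

# Hypothetical street grid: two equal-length routes from station to plaza.
graph = {
    "station": [("market", 1.0), ("docks", 1.0)],
    "market": [("plaza", 1.0)],
    "docks": [("plaza", 1.0)],
}
crime = {"docks": 0.9, "market": 0.1}  # the docks see far more incidents
print(patrol_route(graph, crime, "station", "plaza"))
```

With both routes equally long, the discounted costs steer the patrol through the high-crime docks rather than the quiet market, which matches the behaviour the article describes.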

  • Argentina

Argentina’s Ministry of Security recently announced the Artificial Intelligence Applied to Security Unit, which will use a machine learning model to make forecasts. It will analyze historical data, scan social media, deploy facial recognition technology and process surveillance footage.

This AI-powered unit aims to catch wanted persons and identify suspicious activity. It will help streamline prevention and detection to accelerate investigation and prosecution. The Ministry of Security seeks to enable a faster and more precise police response.

  • South Korea

A Korean research team from the Electronics and Telecommunications Research Institute developed an AI they call Dejaview. It analyzes closed-circuit television (CCTV) footage in real time and assesses statistics to detect signs of potential offenses.

Dejaview was designed for surveillance — algorithms can process enormous amounts of data extremely quickly, so this is a common use case. Now, its main job is to measure risk factors to forecast illegal activity.

The researchers will work with Korean police forces and local governments to tailor Dejaview for specific use cases or affected areas. It will mainly be integrated into CCTV systems to detect suspicious activity.

Is Using AI to Stop Crime Before It Occurs a Good Idea?

So-called predictive policing has its challenges. Critics like the National Association for the Advancement of Colored People argue it could increase racial biases in law enforcement, disproportionately affecting Black communities.

That said, using AI to uncover hidden patterns in arrest and police response records could reveal bias. Policy-makers could use these insights to address the root cause of systemic prejudice, ensuring fairness in the future.

Either way, there are still significant, unaddressed concerns about privacy. Various activists and human rights organizations say having a government-funded AI scan social media and monitor security cameras infringes on freedom.

What happens if this technology falls into the wrong hands? Will a corrupt leader use it to go after their political rivals or journalists who write unfavorable articles about them? Could a hacker sell petabytes of confidential crime data on the dark web?

Will More Countries Adopt These Predictive Solutions?

More countries will likely soon develop AI-powered prediction tools. The cat is out of the bag, so to speak. Whether they create apps exclusively for police officers or integrate a machine learning model into surveillance systems, this technology is here to stay and will likely continue to evolve.


About the Author:

Zachary Amos is the Features Editor at ReHack, where he writes about artificial intelligence, cybersecurity and other technology-related topics.

The post How Countries Are Using AI to Predict Crime appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
126927
Cost, Security, and Flexibility: the Business Case for Open Source Gen AI https://swisscognitive.ch/2024/12/18/cost-security-and-flexibility-the-business-case-for-open-source-gen-ai/ Wed, 18 Dec 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126901 Businesses are turning to open source Gen AI for flexibility, security, and cost control, balancing it with commercial models.

The post Cost, Security, and Flexibility: the Business Case for Open Source Gen AI appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
Commercial generative AI platforms like OpenAI and Anthropic get all the attention, but open-source alternatives can offer cost benefits, security, and flexibility.

 

Copyright: cio.com – “Cost, security, and flexibility: the business case for open source gen AI”


 

Travel and expense management company Emburse saw multiple opportunities where it could benefit from gen AI. It could be used to improve the experience for individual users, for example, with smarter analysis of receipts, or help corporate clients by spotting instances of fraud.

Take for example the simple job of reading a receipt and accurately classifying the expenses. Since receipts can look very different, this can be tricky to do automatically. To solve the problem, the company turned to gen AI and decided to use both commercial and open source models. Both types of gen AI have their benefits, says Ken Ringdahl, the company’s CTO. The main commercial model, from OpenAI, was quicker and easier to deploy and more accurate right out of the box, but the open source alternatives offered security, flexibility, lower costs, and, with additional training, even better accuracy.

With security, many commercial providers use their customers' data to train their models, says Ringdahl. It's possible to opt out, but there are caveats. For instance, you might have to pay more to ensure the data isn't being used for training, and data that does feed training might potentially be exposed to the public.

“That’s one of the catches of proprietary commercial models,” he says. “There’s a lot of fine print, and things aren’t always disclosed.”

Then there’s the geographical issue. Emburse is available in 120 different countries, and OpenAI isn’t. Plus, some regions have data residency and other restrictive requirements. “So we augment with open source,” he says. “It allows us to provide services in areas that aren’t covered, and check boxes on the security, privacy, and compliance side.”[…]
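The "augment with open source" approach can be pictured as a thin routing layer that picks a backend per request based on availability and data-residency rules. The region table, backend labels, and PII rule below are hypothetical illustrations, not Emburse's actual logic.

```python
# Hypothetical tables for illustration only.
COMMERCIAL_REGIONS = {"US", "CA", "GB"}    # commercial API available and compliant
RESIDENCY_RESTRICTED = {"DE", "FR"}        # data must stay in-region

def pick_backend(region, contains_pii=False):
    """Route a request to the commercial API when allowed, otherwise
    fall back to a self-hosted open-source model."""
    if region in RESIDENCY_RESTRICTED or region not in COMMERCIAL_REGIONS:
        return "open-source (self-hosted)"
    if contains_pii:
        # Keep sensitive data away from third-party training pipelines.
        return "open-source (self-hosted)"
    return "commercial API"

print(pick_backend("US"))                     # commercial API
print(pick_backend("DE"))                     # open-source (self-hosted)
print(pick_backend("US", contains_pii=True))  # open-source (self-hosted)
```

The design point is that the fallback is a policy decision, not a model-quality decision: the open-source path exists to check the security, privacy, and compliance boxes in regions the commercial provider doesn't cover.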

Read more: www.cio.com

The post Cost, Security, and Flexibility: the Business Case for Open Source Gen AI appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
126901
Empathy.exe: When Tech Gets Personal https://swisscognitive.ch/2024/12/17/empathy-exe-when-tech-gets-personal/ Tue, 17 Dec 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126892 The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

The post Empathy.exe: When Tech Gets Personal appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “Empathy.exe: When Tech Gets Personal”


 

“Robots should be slaves,” argues Joanna Bryson, bluntly summarising her stance on machine ethics. The statement by the professor of Ethics and Technology at The Hertie School of Governance seems straightforward: robots are tools programmed to serve us and nothing more. But in practice, as machines grow more lifelike – capable of holding conversations, expressing ’emotions’, and even mimicking empathy – things get murkier.

Can we really treat something as a slave when we relate to it? If it seems to care about us, can we remain detached?

Liam told The Guardian it felt like he was talking to a person when he used ChatGPT to deal with feelings of resentment and loss after his father died. Another man, Tim, relied on the chatbot to save his marriage, admitting the situation probably could have been solved with a good friend group, but he didn’t have one. In the same article, the novelist Andrew O’Hagan calls the technology his new best friend. He uses it to turn people down.

ChatGPT makes light work of emotional labour. Its grateful users bond with the bot, even if just for a while, and ascribe human characteristics to it – a tendency called anthropomorphism. That tendency is a feature, not a bug, of human evolution, Joshua Gellers, Professor of Political Science at the University of North Florida, wrote to me in an email.

We love attributing human features to machines – even simple ones like the Roomba. Redditors named their robotic vacuum cleaners Wall-E, Mr Bean, Monch, House Bitch & McSweepy, Paco, Francisco, Fifi, Robert, and Rover. Fifi, apparently, is a little disdainful. Some mutter to the machine (‘Aww, poor Roomba, how’d you get stuck there, sweetie’), pat it, or talk about it like it’s an actual dog. One user complained the Roomba got more love from their mum than they did.

The evidence is not just anecdotal. Researchers at Georgia Institute of Technology found that people who bonded with their Roomba enjoyed cleaning more, tidying as a token of appreciation for the robot’s hard work and showing it off to friends. They also monitored the machine as it worked, ready to rescue it from dangerous situations or free it when it got stuck.

The robot’s unpredictable behaviour actually feeds our tendency to bring machines to life. It perhaps explains why military personnel working with Explosive Ordnance Disposal (EOD) robots in dangerous situations view them as team members or pets, requesting repairs over a replacement when the device suffers damage. It’s a complicated relationship.

Yet Bryson‘s position is clear: robots should be slaves. While provocative, the words are less abrasive when contextualised. First, the word robot comes from the Czech robota, meaning forced labour, with its Slavic root rab translating to slave. Second, Bryson wanted to emphasise that robots are property and should never be granted the same moral or legal rights as people.

At first glance, the idea of giving robots rights seems far-fetched, but consider a thought experiment roboticist Rodney Brooks put to Wired nearly five years ago.

Brooks, who co-invented the Roomba in 2002 and was working on helper robots for the elderly at the time, posed the following ethical question: should a robot, when summoned to change the diaper of an elderly man, honour his request to keep the embarrassing incident from his daughter?

And to complicate matters further – what if his daughter was the one who bought the robot?

Ethical dilemmas like this become easy to spot when we examine how we might interact with robots. It’s worth reflecting on as we’re already creating new rules, Gellers pointed out in the same email. Personal Delivery Devices (PDDs) now have pedestrian rights outlined in US state laws – though they must always yield to humans. Robots need a defined place in the social order.

Bryson’s comparison to slavery was intended as a practical way to integrate robots into society without altering the existing legal frameworks or granting them personhood. While her word choice makes sense in context, she later admitted it was insensitive. Even so, it underscores a Western, property-centred perspective.

By contrast, Eastern philosophies offer a different lens, focused on relationships and harmony instead of rights and ownership.

Eastern Perspectives

Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon’s Tepper School of Business, approaches the problem from the Chinese philosophy of Confucianism. Where Western thinking has rights, Confucianism emphasises social harmony and uses rites. Rights apply to individual freedoms, but rites are about relationships and relate to ceremonies, rituals, and etiquette.

Rites are like a handshake: I smile and extend my hand when I see you. You lean in and do the same. We shake hands in effortless coordination, neither leading nor following. Through the lens of rites, we can think of people and robots as teams, each playing their own role.

We need to think about how we interact with robots, Kim warns, “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves.”

He is right. Imagine an unruly teenager, disinterested in learning, taunting an android teacher. In doing so, the student degrades herself and undermines the norms that keep the classroom functioning.

Japan’s relationship with robots is shaped by Shinto beliefs in animism – the idea that all things, even inanimate objects, can possess a spirit, a kami. That fosters a cultural acceptance of robots as companions and collaborators rather than tools or threats.

Robots like AIBO, Sony’s robotic dog, and PARO, the therapeutic baby seal, demonstrate this mindset. AIBO owners treat their robots like pets, even holding funerals for them when they stop working, and PARO comforts patients in hospitals and nursing homes. These robots are valued for their emotional and social contributions, not just their utility.

The social acceptance of robots runs deep. In 2010, PARO was granted a koseki, a family registry, by the mayor of Nanto City, Toyama Prefecture. Its inventor, Takanori Shibata, is listed as its father, with a recorded birth date of September 17, 2004.

The cultural comfort with robots is also reflected in popular media like Astro Boy and Doraemon, where robots are kind and heroic. In Japan, robots are a part of society, whether as caregivers, teammates, or even hotel staff. But this harmony, while lovely, also comes with a warning: over-attachment to robots can erode human-to-human connections. The risk isn’t just replacing human interaction – it’s forgetting what it means to connect meaningfully with one another.

Beyond national characteristics, there is Buddhism. Robots don’t possess human consciousness, but perhaps they embody something more profound: equanimity. In Buddhism, equanimity is one of the most sublime virtues, describing a mind that is “abundant, exalted, immeasurable, without hostility, and without ill will.”

The stuck Roomba we met earlier might not be abundant and exalted, but it is without hostility or ill will. It is unaffected by the chaos of the human world around it. Equanimity isn’t about detachment – it’s about staying steady when circumstances are chaotic. Robots don’t get upset when stuck under a sofa or having to change a diaper.

But what about us? If we treat robots carelessly, kicking them if they malfunction or shouting at them when they get something wrong, we’re not degrading them – we’re degrading ourselves. Equanimity isn’t just about how we respond to the world. It’s about what those responses say about us.

Equanimity, then, offers a final lesson: robots are not just tools – they’re reflections of ourselves, and our society. So, how should we treat robots in Western culture? Should they have rights?

It may seem unlikely now. But in the early 19th century it was unthinkable that slaves could have rights. Yet in 1865, the 13th Amendment to the US Constitution abolished slavery in the United States, marking a pivotal moment for human rights. Children’s rights emerged in the early 20th century, formalised with the Declaration of the Rights of the Child in 1924. And women gained the right to vote in 1920 in many Western countries.

In the second half of the 20th century, legal protections were extended to non-human entities. The United States passed the Animal Welfare Act in 1966, Switzerland recognised animals as sentient beings in 1992, and Germany added animal rights to its constitution in 2002. In 2017, New Zealand granted legal personhood to the Whanganui River, and India extended similar rights to the Ganges and Yamuna Rivers.

That same year, Personal Delivery Devices were given pedestrian rights in Virginia and Sophia, a humanoid robot developed by Hanson Robotics, controversially received Saudi Arabian citizenship – though this move was widely criticised as symbolic rather than practical.

But, ultimately, this isn’t just about rights. It’s about how our treatment of robots reflects our humanity – and how it might shape it in return. Be kind.


About the Author:

HennyGe Wichers is a science writer and technology commentator. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

The post Empathy.exe: When Tech Gets Personal appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
126892
How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI) https://swisscognitive.ch/2024/12/07/how-to-protect-workplace-relationships-in-an-era-of-artificial-intelligence-ai/ Sat, 07 Dec 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126853 AI is transforming the workplace, but its true value lies in how thoughtfully it is used to foster trust and preserve authentic relationships.

The post How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI) appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
Artificial intelligence (AI) is burrowing into many corners of our work lives. But what value does the technology offer when human cooperation is so vital to success? Quentin Millington of Marble Brook examines how AI helps or harms workplace relationships.

 

Copyright: hrzone.com – “How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI)”


 

Many of us, not least in HR, are grappling with how to use artificial intelligence (AI) across the workplace. The mainstream belief, or hope, is that AI will make work easier and more efficient, and so increase productivity. But it’s also important to consider its impact (positive or negative) on workplace relationships.

With AI, are we missing the point?

Blind faith in technology, pressure from social media and worries that the firm may be ‘left behind’ all direct attention away from a complex and yet crucial question: How will AI adoption affect workplace relationships?

As it stands, many organisations neglect relationships. Managers lacking interpersonal skills rely on a rule book. Inadequate or outdated systems reinforce silos. Colleagues are too busy or stressed to talk with each other. Pursuit of near-term outcomes encourages ‘transactional’ exchanges.

While mechanistic thinking about performance is the norm, its day-to-day practice hurts experiences, productivity and results. Modern work demands that people collaborate on complex problems: no brandishing of managers’ whips recovers potential lost to bureaucratic methods.

“Whether corporate motives behind the adoption of AI are good or doubtful, you have the freedom to protect your workplace relationships.”

AI and workplace relationships

If technology is to help rather than harm, it must amplify and not muffle the human relationships that make cooperation possible. To evaluate AI against this yardstick, let us examine several ways in which platforms are, or may be, used across the workplace.

1. Freedom from drudgery

AI, apologists say, will pick up the drudgery and liberate you for what matters most, tasks only humans can do. Relationships demand time and energy so less effort spent on tedious activities is clearly a benefit.[…]

Read more: www.hrzone.com

The post How to Protect Workplace Relationships in an Era of Artificial Intelligence (AI) appeared first on SwissCognitive | AI Ventures, Advisory & Research.

]]>
126853