Law Archives – SwissCognitive | AI Ventures, Advisory & Research https://swisscognitive.ch/industry/law/

The Relentless Tide of Technological Disruption: Are You Ready? https://swisscognitive.ch/2025/02/25/the-relentless-tide-of-technological-disruption-are-you-ready/ Tue, 25 Feb 2025 12:54:53 +0000

The future belongs to those who adapt—AI, automation, blockchain and digital disruption are reshaping industries.

 

SwissCognitive Guest Blogger: Samir Anil Jumade – “The Relentless Tide of Technological Disruption: Are You Ready?”


 

The world is evolving at an unprecedented pace, driven by rapid technological advancements. Many industries that once seemed invincible have either vanished or are on the verge of collapse due to their failure to adapt. The rise of artificial intelligence (AI), automation, blockchain, and digital platforms is fundamentally reshaping how businesses operate.

In this article, we explore how past giants like Kodak and Nokia disappeared, how today’s industries are facing a similar existential crisis, and how individuals and businesses must prepare for this inevitable transformation.

The Rise and Fall of Industry Giants

Remember Kodak? In 1997, the company employed 160,000 people and dominated the photography market, with its cameras capturing 85% of the world’s images. Fast forward a few years, and the rise of mobile phone cameras decimated Kodak, leading to bankruptcy and the loss of all those jobs. Kodak’s story isn’t unique. A host of once-dominant companies – HMT, Bajaj, Dyanora, Murphy, Nokia, Rajdoot, and Ambassador among them – failed to adapt and were swept aside by the relentless tide of technological change. These weren’t inferior products; they simply couldn’t evolve with the times.

This isn’t just a nostalgic look back. It’s a stark warning. The world is changing faster than ever, and we’re on the cusp of another massive transformation – the Fourth Industrial Revolution. Think about how much has changed in the last decade. Now imagine the next ten years. Experts predict that 70-90% of today’s jobs will be obsolete within that time frame. Are we prepared?

Look at some of today’s giants. Uber, the world’s largest taxi company, owns no cars. Airbnb, the biggest hotel chain, owns no hotels. These companies, built on software and connectivity, are disrupting traditional industries and redefining how we live and work. This disruption is happening across all sectors.

Consider the legal profession. AI-powered legal software like IBM Watson can analyze cases and provide advice far more efficiently than human lawyers. Similarly, in healthcare, diagnostic tools can detect diseases like cancer with greater accuracy than human doctors. These advancements, while offering immense potential benefits, also threaten to displace a significant portion of the workforce.

The automotive industry is another prime example. Self-driving cars are no longer science fiction; they’re a rapidly approaching reality. Imagine a world where 90% of today’s cars are gone, replaced by autonomous electric or hybrid vehicles. Roads would be less congested, accidents drastically reduced, and the need for parking and traffic enforcement would dwindle. But what happens to the millions of people whose livelihoods depend on driving, car insurance, or related industries?

Even the way we handle money is transforming. Cash is becoming a relic of the past, replaced by “plastic money” and, increasingly, mobile wallets like Paytm. This shift towards digital transactions offers convenience and efficiency, but also raises questions about security, privacy, and the future of traditional banking.

From STD Booths to Smartphones: A Revolution in Communication

Think back to the time when STD booths lined our streets. These public call offices were once essential for long-distance communication. But the advent of mobile phones sparked a revolution that swept STD booths into obsolescence. Those who adapted transformed into mobile recharge shops, only to be disrupted again by the rise of online mobile recharging. Today, mobile phone sales are increasingly happening directly through e-commerce platforms like Amazon and Flipkart, further highlighting the rapid pace of change.

The Evolving Definition of Money

The concept of money itself is undergoing a radical transformation. We’ve moved from cash to credit cards, and now mobile wallets are gaining traction. This shift offers convenience and efficiency, but it also has broader implications. As we move towards a cashless society, we need to consider the potential impact on financial inclusion, security, and privacy.

The Message is Clear: Adapt or Be Left Behind

The message is clear: adaptation is no longer a choice; it’s a necessity. We must embrace lifelong learning and upskilling to navigate this rapidly changing landscape. We need to foster creativity, critical thinking, and problem-solving skills – qualities that are difficult for machines to replicate. The future belongs to those who can innovate, adapt, and thrive in a world increasingly shaped by technology. The question is: will you be ready?

Additional Points to Consider:

· The environmental impact of technological advancements, both positive and negative.

· The ethical considerations surrounding AI and automation.

· The role of government and education in preparing the workforce for the future.

· The potential for new industries and job roles to emerge.

By staying informed and proactive, we can harness the power of technology to create a better future for all.


About the Author:

Samir Jumade is a passionate and experienced Blockchain Engineer with over three years of expertise in Ethereum and Bitcoin ecosystems. As a Senior Blockchain Engineer at Woxsen University, he has led innovative projects, including the Woxsen Stock Exchange and Chain Reviews, leveraging smart contracts, full nodes, and decentralized applications. With a strong background in Solidity, Web3.js, and backend technologies, Samir specializes in optimizing transaction processing, multisig wallets, and blockchain architecture.

AI Takes Center Stage at Davos 2025: A SwissCognitive Perspective https://swisscognitive.ch/2025/01/25/ai-takes-center-stage-at-davos-2025-a-swisscognitive-perspective/ Sat, 25 Jan 2025 15:57:43 +0000 Davos 2025 showcased AI's role in driving global collaboration, ethical governance, open-source innovation alongside national investments.

The discussions at Davos 2025 highlighted AI’s growing influence on global collaboration, ethical governance, and the evolving balance between national investments and open-source innovation.

 

Dalith Steiger-Gablinger, Co-Founder SwissCognitive – “AI Takes Center Stage at Davos 2025: A SwissCognitive Perspective”


 

As the snow-capped peaks of Davos played host to the World Economic Forum 2025, the air was thick with excitement and a palpable sense of urgency. This year’s theme, “Collaboration for the Intelligent Age,” set the stage for intense discussions on artificial intelligence (AI) and its potential to reshape our world. As co-founders of SwissCognitive, Andy Fitze and I, Dalith Steiger, had the privilege of being flies on the wall at various public side events, soaking in the insights and debates that unfolded.

The buzz around AI was impossible to ignore, with sessions ranging from “Harnessing AI for Social Innovation” to “The Pulse of AI Innovation”. Clearly, the technology has moved beyond mere hype and into the realm of transformative force. As James Ong, one of the panellists, aptly put it, “We need to rethink the philosophy and the relationship between AI and human beings.” AI is not just a tool; it is a paradigm shift that will redefine how we work, live, and interact with the world around us.

“We need to rethink the philosophy and the relationship between AI and human beings.” – James Ong, Founder and Director of the Artificial Intelligence International Institute [AIII]

 

One of the most striking aspects of the discussions was the emphasis on collaboration. Gone are the days of siloed AI development. The consensus at Davos was clear: to harness the full potential of AI and ensure its benefits are widely distributed, we need unprecedented levels of cooperation between governments, businesses, and civil society.

Another discussion that deeply resonated with our vision at SwissCognitive was the one on avoiding the pitfalls of the digital divide, which emphasised the need for AI to “lift all boats” rather than exacerbate existing inequalities. We strongly advocated for inclusive AI development.

The ethical implications of AI were another hot topic. The sentiment that we are not just building algorithms; we are shaping the future of humanity was echoed across multiple panels, with discussions ranging from AI’s impact on privacy to its potential to either mitigate or exacerbate climate change.

As we navigated the bustling streets of Davos, Andy and I found ourselves in impromptu discussions with fellow attendees – one of the most enlightening while waiting for the Meta hot chocolate, another while queuing for the entrance of the Dome. One thing was present through all our exchanges: people engaged openly, with respect and humour.

The energy was infectious, with everyone from startup founders to policymakers eager to share their perspectives on AI’s future. One conversation that stuck with us was with a young entrepreneur who’s using AI to tackle food waste in developing countries. It was a powerful reminder of AI’s potential to address some of our most pressing global challenges and SDGs.

The governance of AI emerged as a critical theme throughout the forum. With the rapid pace of AI development, there’s a growing recognition that our regulatory frameworks need to evolve just as quickly. The call for adaptive, agile governance structures was loud and clear. We shouldn’t govern 21st-century technology with 20th-century laws!

“We shouldn’t govern 21st-century technology with 20th-century laws!” – heard during a debate held under the Chatham House Rule

 

Perhaps the most stimulating discussions, however, centred around the potential of AI to complement human capabilities rather than replace them. AI should be seen as a co-pilot, not an autopilot. As advocates of collaboration between humans and AI, Andy and I were heartened to hear leaders from different sectors emphasise the importance of involving humans in development.

“AI should be seen as a co-pilot, not an autopilot.” – heard during a debate held under the Chatham House Rule

 

The Open Source Revolution: A Game-Changer in the Global AI Race

Another topic that consistently emerged in our conversations was the growing importance of open source in AI development. This trend is not just reshaping the technological landscape; it’s also challenging the traditional narrative of national AI supremacy.

The United States’ commitment to investing a staggering $500 billion in AI over the next three years is undoubtedly headline-grabbing. However, as Yann LeCun, VP & Chief AI Scientist at Meta, astutely pointed out during several discussions in Davos, the real story might be the rise of open-source models rather than any single nation’s dominance.

LeCun’s perspective is particularly illuminating: “To people who see the performance of DeepSeek and think: ‘China is surpassing the US in AI.’ You are reading this wrong. The correct reading is: ‘Open source models are surpassing proprietary ones.'”

“Open source LLM models are surpassing proprietary ones.” – Yann LeCun, VP & Chief AI Scientist at Meta

 

This shift towards open source is democratising AI development on a global scale. LeCun explained that “DeepSeek has profited from open research and open source (e.g. PyTorch and Llama from Meta). They came up with new ideas and built them on top of other people’s work. Because their work is published and open source, everyone can profit from it. That is the power of open research and open source.”

Indeed, the open-source movement in AI is gaining momentum rapidly. Models like Llama 2, Mistral, and DeepSeek are not just matching but, in some cases, surpassing the capabilities of proprietary giants like GPT-4 and Google Gemini. This trend is reshaping the AI ecosystem, offering adaptability, cost-efficiency, and privacy compliance that many enterprises find increasingly attractive.

The implications of this shift are profound. While national investments like the U.S.’s $500 billion commitment are crucial, the collaborative nature of open-source development means that innovations can come from anywhere. This global pool of talent and ideas could potentially accelerate AI development far beyond what any single nation or company could achieve alone.

Moreover, the open source movement aligns with the growing calls for AI transparency and accountability. One tech executive at Davos noted, “We’re not just building algorithms; we’re shaping the future of humanity.” Open source development allows for greater scrutiny and collective problem-solving, potentially leading to safer and more ethical AI systems.

“We’re not just building algorithms; we’re shaping the future of humanity.” – a CEO during a panel in Davos

 

As we reflect on the discussions at Davos, it’s clear that the future of AI is not just about who can invest the most money. It’s about fostering a global ecosystem of innovation, collaboration, and shared progress. The rise of open source in AI is not just a technological trend; it’s a paradigm shift that could redefine how we approach some of the world’s most pressing challenges.

In this new landscape, the winners will not necessarily be the nations or companies with the deepest pockets but those who can best harness the collective intelligence of the global AI community. As we move forward, it will be fascinating to see how this open-source revolution continues to shape the future of AI and, by extension, our world.

“In this new landscape, the winners will not necessarily be the nations or companies with the deepest pockets, but those who can best harness the collective intelligence of the global AI community.” – Andy Fitze, Co-Founder SwissCognitive

 

As the forum drew to a close, we left Davos with a sense of cautious optimism. The challenges ahead are significant, but so too is the collective will to address them. The conversations made it clear that we are at a pivotal moment in the development of AI, and the decisions we make now will shape its trajectory for years to come. This future belongs to the younger generations. We, the older generation, must be aware that many of the decisions we make today will affect not us but them – a responsibility we cannot take lightly.

As we return to our work at SwissCognitive, we’re more energised than ever to continue fostering dialogue and collaboration in AI. The insights gained at Davos will undoubtedly inform our efforts to build a future where AI truly lifts all boats, creating a rising tide of innovation and prosperity for all.

“We are the change we wanna see.” – Yip Thy Diep Ta, Founder & CEO @ J3D.AI, House of Collaboration

 

In reflecting on our experience, Andy remarked, “The technical possibilities of AI are astounding, but it’s the human ingenuity in applying these technologies that will truly change the world.” I couldn’t agree more, adding, “AI has the power to amplify our human potential, but only if we approach its development with empathy, wisdom, and a commitment to inclusivity.”

Who Owns the Sound? AI in Music and the Legal Landscape https://swisscognitive.ch/2025/01/21/who-owns-the-sound-ai-in-music-and-the-legal-landscape/ Tue, 21 Jan 2025 12:09:04 +0000 AI-generated music challenges copyright laws, sparking debates on ownership, compliance, and protecting artists' rights.

AI-generated music is challenging traditional copyright frameworks, raising questions about ownership, legal compliance, and the balance between AI innovation and protecting artists’ creative rights.

 

SwissCognitive Guest Blogger: Shivi Gupta – “Who Owns the Sound? AI in Music and the Legal Landscape”


 

Attending a live gig, enjoying the music of your favourite artist or band? What if that experience could come to your couch, with the feeling that they are performing right there in front of you? But hey, who is producing the music? Is it AI – or Al?

The creation is revered, but more than the creation, the creators are worshipped. Recently, Sony, Universal, and Warner sued Suno and Udio (GenAI music startups), claiming copyright infringement in the training of their models, to protect the artists affiliated with these giants.

Major record labels are protecting their clients – the artists, the great ones who produce music that can rarely be replicated. But in this day and age of generative AI (GPT: Generative Pre-trained Transformer), music is also replicated by machine learning algorithms to make songs sound like the original creators.

As one of the popular web3 music websites, Unchainedmusic.io, wrote in their article: “Deepfake vocal synthesizers, an innovation in AI technology, can make a singer’s voice sound like a famous artist. Under English and EU law, it is unlikely that a style of singing, whether generated through deep learning, AI or vocal imitation, is protectable by copyright. However, other forms of intellectual property, such as passing off, may be relevant in some jurisdictions.”

There is no universal law governing intellectual property; most countries have their own rules on copyright and patents. Any commercial use requires permission from the creator who owns the intellectual property in their voice.

Problem:

All music can be created eclectically with different styles, lyrics and genres. GenAI music might saturate the market with more and more music generated by machine learning algorithms.

Possibility:

Music lovers will still rely on humans creating songs because of the emotional factor: timbre, tone, pitch, stretch, diction, and accent are some of the unique human characteristics that help us be empathetic and understand the singer’s mindset.

Probability:

These AI created songs will be used by ad companies and video editors to feature a product or sell a service with an attractive UX.

Musicians will continue creating great records and go on tours, and fill stadiums.

Editors, marketers, and sales representatives will use GenAI music in elevators, advertisements, branding, and showcases of their products and services. GenAI music will complement the various *-as-a-service offerings.

Proposed Solution:

Follow the rules created by the countries in which these AI tools are used. For classical music, the law – as summarised by edwardslaw.ca – states that copyright “covers original literary, dramatic, musical, and artistic works of authorship. This is during the lifetime of the author, the remainder of the calendar year in which the author dies, plus 70 additional years (the Canadian copyright lifespan recently increased from 50 to 70 years in June of 2022). Once this term expires, the work becomes public domain.” So works by Beethoven, Mozart et al. can be performed in public without permission or paying a fee – royalty free. Likewise, music recorded before 1974 can be used because it has entered the public domain. But if the London Symphony Orchestra uploads its recording of Beethoven’s Symphony No. 5, one cannot use that recording without the orchestra’s permission.

For example, this particular YouTube video cannot be reused without the BBC’s permission:

[Embedded video: “Who Owns the Sound – AI in Music and the Legal Landscape”]
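
Returning to the copyright term quoted above, the arithmetic is simple enough to sketch. The toy function below applies only the “life of the author + remainder of the calendar year + 70 years” formula; it is an illustration, not legal advice, and it ignores the complications (sound-recording rights, joint authorship, the 2022 transition rules, other jurisdictions) that a real rights check would need.

```python
# Toy check of the quoted Canadian term: author's life + remainder of that
# calendar year + 70 years. Illustration only -- not legal advice; it ignores
# sound-recording rights, joint authors, transition rules and other countries.
from datetime import date
from typing import Optional

def composition_in_public_domain(author_death_year: int, today: Optional[date] = None) -> bool:
    today = today or date.today()
    # Protection runs to the end of the 70th year after the year of death,
    # so the work enters the public domain on 1 January of death_year + 71.
    public_domain_from = date(author_death_year + 71, 1, 1)
    return today >= public_domain_from

print(composition_in_public_domain(1827))  # Beethoven (died 1827) -> True
print(composition_in_public_domain(1990))  # a hypothetical composer -> False (for now)
```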

More on copyright of voices: “According to Herndon, much of vocal mimicry comes down to personality rights. “You cannot copyright a voice, but an artist retains exclusive commercial rights to their name and you cannot pass off a song as coming from them without their consent,” she wrote in a recent Twitter thread, citing previous legal cases related to vocal impersonation.”



About the Author:

Shivi Gupta is a passionate data scientist and full-stack developer who has worked in the industry for over a decade. An AI expert navigating the world of real versus generated content, he focuses on ethics and builds websites, mobile applications, and chatbots, all powered by AI.

 

What Happens When AI Commodifies Emotions? https://swisscognitive.ch/2025/01/14/what-happens-when-ai-commodifies-emotions/ Tue, 14 Jan 2025 04:44:00 +0000 The latest AI developments might turn empathy into just another product for sale, raising questions about ethics and regulation.

The latest AI developments turn empathy into just another product for sale, raising questions about ethics and regulation.

 

SwissCognitive Guest Blogger:  HennyGe Wichers, PhD – “What Happens When AI Commodifies Emotions?”


 

Imagine your customer service chatbot isn’t just solving your problem – it’s listening, empathising, and sounding eerily human. It feels like it cares. But behind the friendly tone and comforting words, that ‘care’ is just a product, fine-tuned to steer your emotions and shape your decisions. Welcome to the unsettling reality of empathetic AI, where emotions are mimicked – and monetised.

In 2024, empathetic AI took a leap forward. Hume.AI gave large language models voices that sound convincingly expressive and a perceptive ear to match. Microsoft’s Copilot got a human voice and an emotionally supportive attitude, while platforms like Character.ai and Psychologist sprouted bots that mimic therapy sessions. These developments are paving the way for a new industry: Empathy-as-a-Service, where emotional connection isn’t just simulated, it’s a product: packaged, scaled, and sold.

This is not just about convenience – but about influence. Empathy-as-a-Service (EaaS), an entirely hypothetical but now plausible product, could blur the line between genuine connection and algorithmic mimicry, creating systems where simulated care subtly nudges consumer behaviour. The stakes? A future where businesses profit from your emotions under the guise of customer experience. And for consumers on the receiving end, that raises some deeply unsettling questions.

A Hypothetical But Troubling Scenario

Take an imaginary customer service bot. One that helps you find your perfect style and fit – and also tracks your moods and emotional triggers. Each conversation teaches it a little more about how to nudge your behaviour, guiding your decisions while sounding empathetic. What feels like exceptional service is, in reality, a calculated strategy to lock in your loyalty by exploiting your emotional patterns.

Traditional loyalty programs, like the supermarket club card or rewards card, pale in comparison. By analysing preferences, moods, and triggers, empathetic AI digs into the most personal corners of human behaviour. For businesses, it’s a goldmine; for consumers, it’s a minefield. And it raises a new set of ethical questions about manipulation, regulation, and consent.

The Legal Loopholes

Under the General Data Protection Regulation (GDPR), consumer preferences are classified as personal data, not sensitive data. That distinction matters. While GDPR requires businesses to handle personal data transparently and lawfully, it doesn’t extend the stricter protections reserved for health, religious beliefs, or other special categories of information. This leaves businesses free to mine consumer preferences in ways that feel strikingly personal – and surprisingly unregulated.

The EU AI Act, introduced in mid-2024, goes one step further, requiring companies to disclose when users are interacting with AI. But disclosure is just the beginning. The AI Act doesn’t touch using behavioural data or mimicking emotional connection. Joanna Bryson, Professor of Ethics & Technology at the Hertie School, noted in a recent exchange: “It’s actually the law in the EU under the AI Act that people understand when they are interacting with AI. I hope that might extend to mandating reduced anthropomorphism, but it would take some time and court cases.”

Anthropomorphism, the tendency to project human qualities onto non-humans, is ingrained in human nature. Simply stating that you’re interacting with an AI doesn’t stop it. The problem is that it can lull users into a false sense of trust, making them more vulnerable to manipulation.

Empathy-as-a-Service could transform customer experiences, making interactions smoother, more engaging, and hyper-personalised. But there’s a cost. Social media already showed us what happens when human interaction becomes a commodity – and empathetic AI could take that even further. This technology could go beyond monetising attention to monetising emotions in deeply personal and private ways.

A Question of Values

As empathetic AI becomes mainstream, we have to ask: are we ready for a world where emotions are just another digital service – scaled, rented, and monetised? Regulation like the EU AI Act is a step in the right direction, but it will need to evolve fast to keep pace with the sophistication of these systems and the societal boundaries they’re starting to push.

The future of empathetic AI isn’t just a question of technological progress – it’s a question of values. What kind of society do we want to build? As we stand on the edge of this new frontier, the decisions we make today will define how empathy is shaped, and sold, in the age of AI.


About the Author:

HennyGe Wichers is a technology science writer and reporter. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

How Countries Are Using AI to Predict Crime https://swisscognitive.ch/2024/12/23/how-countries-are-using-ai-to-predict-crime/ Mon, 23 Dec 2024 10:53:39 +0000 To predict future crimes seems like something from a sci-fi novel — but already, countries are using AI to forecast misconduct.

Countries aren’t only using AI to organize quick responses to crime — they’re also using it to predict crime. The United States and South Africa have AI crime prediction tools in development, while Japan, Argentina, and South Korea have already introduced this technology into their policing. Here’s what it looks like.

 

SwissCognitive Guest Blogger: Zachary Amos – “How Countries Are Using AI to Predict Crime”


 

A world where police departments can predict when, where and how crimes will occur seems like something from a science fiction novel. Thanks to artificial intelligence, it has become a reality. Already, countries are using this technology to forecast misconduct.

How Do AI-Powered Crime Prediction Systems Work?

Unlike regular prediction systems — which typically use hot spots to determine where and when future misconduct will be committed — AI can analyze information in real time. It may even be able to complete supplementary tasks like summarizing a 911 call, assigning a severity level to a crime in progress or using surveillance systems to tell where wanted criminals will be.

A machine learning model evolves as it processes new information. Initially, it might train to find hidden patterns in arrest records, police reports, criminal complaints or 911 calls. It may analyze the perpetrator’s demographic data or factor in the weather. The goal is to identify any common variable that humans are overlooking.

Whether the algorithm monitors surveillance camera footage or pores over arrest records, it compares historical and current data to make forecasts. For example, it may consider a person suspicious if they cover their face and wear baggy clothes on a warm night in a dark neighborhood because previous arrests match that profile.
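
To make that pipeline concrete, here is a minimal, illustrative sketch of the idea – historical incident features in, a risk score per place and time out. It is not any agency’s actual system: the features, the synthetic data and the model choice are all assumptions made purely for illustration.

```python
# Toy "hot spot" risk model trained on synthetic incident data. Illustration only:
# it is not any police agency's system; all features and data are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features for a (grid cell, hour) pair.
hour = rng.integers(0, 24, n)                      # hour of day
is_dark = ((hour < 6) | (hour > 20)).astype(int)   # rough proxy for darkness
temperature = rng.normal(15, 8, n)                 # ambient temperature
recent_incidents = rng.poisson(1.5, n)             # incidents nearby last week
bus_distance_km = rng.exponential(0.5, n)          # distance to nearest bus line

# Synthetic ground truth: risk rises at night and where recent incidents cluster.
logit = -2.0 + 0.9 * is_dark + 0.6 * recent_incidents - 0.4 * bus_distance_km
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([hour, is_dark, temperature, recent_incidents, bus_distance_km])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)
print("Held-out AUC:", round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 3))
```

A real deployment would add spatial joins, temporal cross-validation and – as discussed later in this article – bias and fairness audits; the sketch only shows the historical-features-to-risk-score step.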

Countries Are Developing AI Tools to Predict Crime

While these countries don’t currently have official AI prediction tools, various research groups and private police forces are developing solutions.

  • United States

Violent and property crimes are huge issues in the United States. For reference, a burglary occurs every 13 seconds — almost five times per minute — causing an average of $2,200 in losses. Various state and local governments are experimenting with AI to minimize events like these.

One such machine learning model developed by data scientists from the University of Chicago uses publicly available information to produce output. It can forecast crime with approximately 90% accuracy up to one week in advance.

While the data came from eight major U.S. cities, it centered around Chicago. Unlike similar tools, this AI model didn’t depict misdemeanors and felonies as hot spots on a flat map. Instead, it considered cities’ complex layouts and social environments, including bus lines, street lights and walkways. It found hidden patterns using these previously overlooked factors.

  • South Africa

Human trafficking is a massive problem in South Africa. For a time, one anti-human trafficking non-governmental organization was operating at one of the country’s busiest airports. After the group uncovered widespread corruption, their security clearance was revoked.

At this point, the group needed to lower its costs from $300 per intercept to $50 to align with funding and continue their efforts. Its members believed adopting AI would allow them to do that. With the right data, they could save more victims while keeping costs down.

Some Are Already Using AI Tools to Predict Crime

Governments have much more power, funding and data than nongovernmental organizations or research groups, so their solutions are more comprehensive.

  • Japan

Japan has an AI-powered app called Crime Nabi. The tool — created by the startup Singular Perturbations Inc. — is at least 50% more effective than conventional methods. Local governments will use it for preventive patrols.

Once a police officer enters their destination in the app, it provides an efficient route that takes them through high-crime areas nearby. The system can update if they get directed elsewhere by emergency dispatch. By increasing their presence in dangerous neighborhoods, police officers actively discourage wrongdoing. Each patrol’s data is saved to improve future predictions.

Despite using massive amounts of demographic, location, weather and arrest data — which would normally be expensive and incredibly time-consuming to process — Crime Nabi runs faster than conventional approaches and at a lower cost.
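
Singular Perturbations has not published Crime Nabi’s internals, but the routing idea described above – get the officer to their destination while passing through nearby high-risk blocks – can be approximated with an ordinary shortest-path search in which risky street segments are made “cheaper”. The graph, the risk scores and the discount factor below are invented purely for illustration.

```python
# Toy risk-aware patrol routing: discount the cost of historically risky street
# segments so the shortest path prefers them. All numbers are invented.
import networkx as nx

G = nx.Graph()
# (from, to, length in metres, historical risk in [0, 1])
edges = [
    ("station", "A", 350, 0.1), ("A", "B", 300, 0.8), ("B", "dest", 350, 0.7),
    ("station", "C", 350, 0.1), ("C", "dest", 380, 0.1), ("A", "C", 200, 0.2),
]
ALPHA = 0.7  # how strongly risk discounts cost; 0 means ignore risk entirely
for u, v, length, risk in edges:
    # Cost shrinks (but stays positive) as historical risk grows.
    G.add_edge(u, v, weight=length * (1 - ALPHA * risk), length=length, risk=risk)

route = nx.shortest_path(G, "station", "dest", weight="weight")
print("Patrol route:", " -> ".join(route))
```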

  • Argentina

Argentina’s Ministry of Security recently announced the Artificial Intelligence Applied to Security Unit, which will use a machine learning model to make forecasts. It will analyze historical data, scan social media, deploy facial recognition technology and process surveillance footage.

This AI-powered unit aims to catch wanted persons and identify suspicious activity. It will help streamline prevention and detection to accelerate investigation and prosecution. The Ministry of Security seeks to enable a faster and more precise police response.

  • South Korea

A Korean research team from the Electronics and Telecommunications Research Institute developed an AI they call Dejaview. It analyzes closed-circuit television (CCTV) footage in real time and assesses statistics to detect signs of potential offenses.

Dejaview was designed for surveillance — algorithms can process enormous amounts of data extremely quickly, so this is a common use case. Now, its main job is to measure risk factors to forecast illegal activity.

The researchers will work with Korean police forces and local governments to tailor Dejaview for specific use cases or affected areas. It will mainly be integrated into CCTV systems to detect suspicious activity.

Is Using AI to Stop Crime Before It Occurs a Good Idea?

So-called predictive policing has its challenges. Critics like the National Association for the Advancement of Colored People argue it could increase racial biases in law enforcement, disproportionately affecting Black communities.

That said, using AI to uncover hidden patterns in arrest and police response records could reveal bias. Policy-makers could use these insights to address the root cause of systemic prejudice, ensuring fairness in the future.

Either way, there are still significant, unaddressed concerns about privacy. Various activists and human rights organizations say having a government-funded AI scan social media and monitor security cameras infringes on freedom.

What happens if this technology falls into the wrong hands? Will a corrupt leader use it to go after their political rivals or journalists who write unfavorable articles about them? Could a hacker sell petabytes of confidential crime data on the dark web?

Will More Countries Adopt These Predictive Solutions?

More countries will likely soon develop AI-powered prediction tools. The cat is out of the bag, so to speak. Whether they create apps exclusively for police officers or integrate a machine learning model into surveillance systems, this technology is here to stay and will likely continue to evolve.


About the Author:

Zachary AmosZachary Amos is the Features Editor at ReHack, where he writes about artificial intelligence, cybersecurity and other technology-related topics.

Empathy.exe: When Tech Gets Personal https://swisscognitive.ch/2024/12/17/empathy-exe-when-tech-gets-personal/ Tue, 17 Dec 2024 04:44:00 +0000

The more robots act like us, the less they feel like tools. So how should we treat them? And what does that say about us?

 

SwissCognitive Guest Blogger: HennyGe Wichers, PhD – “Empathy.exe: When Tech Gets Personal”


 

“Robots should be slaves,” argues Joanna Bryson, bluntly summarising her stance on machine ethics. The statement by the professor of Ethics and Technology at The Hertie School of Governance seems straightforward: robots are tools programmed to serve us and nothing more. But in practice, as machines grow more lifelike – capable of holding down conversations, expressing ’emotions’, and even mimicking empathy – things get murkier.

Can we really treat something as a slave when we relate to it? If it seems to care about us, can we remain detached?

Liam told The Guardian it felt like he was talking to a person when he used ChatGPT to deal with feelings of resentment and loss after his father died. Another man, Tim, relied on the chatbot to save his marriage, admitting the situation probably could have been solved with a good friend group, but he didn’t have one. In the same article, the novelist Andrew O’Hagan calls the technology his new best friend. He uses it to turn people down.

ChatGPT makes light work of emotional labour. Its grateful users bond with the bot, even if just for a while, and ascribe human characteristics to it – a tendency called anthropomorphism. That tendency is a feature, not a bug, of human evolution, Joshua Gellers, Professor of Political Science at the University of North Florida, wrote to me in an email.

We love attributing human features to machines – even simple ones like the Roomba. Redditors have named their robotic vacuum cleaners Wall-E, Mr Bean, Monch, House Bitch & McSweepy, Paco, Francisco, Fifi, Robert, and Rover. Fifi, apparently, is a little disdainful. Some mutter to the machine (‘Aww, poor Roomba, how’d you get stuck there, sweetie’), pat it, or talk about it like it’s an actual dog. One user complained the Roomba got more love from their mum than they did.

The evidence is not just anecdotal. Researchers at Georgia Institute of Technology found people who bonded with their Roomba enjoyed cleaning more, tidying as a token of appreciation for the robot’s hard work, and showing it off to friends. They monitor the machine as it works, ready to rescue it from dangerous situations or when it gets stuck.

The robot’s unpredictable behaviour actually feeds our tendency to bring machines to life. It perhaps explains why military personnel working with Explosive Ordnance Disposal (EOD) robots in dangerous situations view them as team members or pets, requesting repairs over a replacement when the device suffers damage. It’s a complicated relationship.

Yet Bryson‘s position is clear: robots should be slaves. While provocative, the words are less abrasive when contextualised. To start, the word robot comes from the Czech robota, meaning forced labour, with its Slavic root rab translating to slave. And secondly, Bryson wanted to emphasise that robots are property and should never be granted the same moral or legal rights as people.

At first glance, the idea of giving robots rights seems far-fetched, but consider a thought experiment roboticist Rodney Brooks put to Wired nearly five years ago.

Brooks, who coinvented the Roomba in 2002 and was working on helper robots for the elderly at the time, posed the following ethical question: should a robot, when summoned to change the diaper of an elderly man, honour his request to keep the embarrassing incident from his daughter?

And to complicate matters further – what if his daughter was the one who bought the robot?

Ethical dilemmas like this become easy to spot when we examine how we might interact with robots. It’s worth reflecting on as we’re already creating new rules, Gellers pointed out in the same email. Personal Delivery Devices (PDDs) now have pedestrian rights outlined in US state laws – though they must always yield to humans. Robots need a defined place in the social order.

Bryson’s comparison to slavery was intended as a practical way to integrate robots into society without altering the existing legal frameworks or granting them personhood. While her word choice makes sense in context, she later admitted it was insensitive. Even so, it underscores a Western, property-centred perspective.

By contrast, Eastern philosophies offer a different lens, focused on relationships and harmony instead of rights and ownership.

Eastern Perspectives

Tae Wan Kim, Associate Professor of Business Ethics at Carnegie Mellon’s Tepper School of Business, approaches the problem from the Chinese philosophy of Confucianism. Where Western thinking has rights, Confucianism emphasises social harmony and uses rites. Rights apply to individual freedoms, but rites are about relationships and relate to ceremonies, rituals, and etiquette.

Rites are like a handshake: I smile and extend my hand when I see you. You lean in and do the same. We shake hands in effortless coordination, neither leading nor following. Through the lens of rites, we can think of people and robots as teams, each playing their own role.

We need to think about how we interact with robots, Kim warns, “To the extent that we make robots in our image, if we don’t treat them well, as entities capable of participating in rites, we degrade ourselves.”

He is right. Imagine an unruly teenager, disinterested in learning, taunting an android teacher. In doing so, the student degrades herself and undermines the norms that keep the classroom functioning.

Japan’s relationship with robots is shaped by Shinto beliefs in animism – the idea that all things, even inanimate objects, can possess a spirit, a kami. That fosters a cultural acceptance of robots as companions and collaborators rather than tools or threats.

Robots like AIBO, Sony’s robotic dog, and PARO, the therapeutic baby seal, demonstrate this mindset. AIBO owners treat their robots like pets, even holding funerals for them when they stop working, and PARO comforts patients in hospitals and nursing homes. These robots are valued for their emotional and social contributions, not just their utility.

The social acceptance of robots runs deep. In 2010, PARO was granted a koseki, a family registry, by the mayor of Nanto City, Toyama Prefecture. Its inventor, Takanori Shibata, is listed as its father, with a recorded birth date of September 17, 2004.

The cultural comfort with robots is also reflected in popular media like Astro Boy and Doraemon, where robots are kind and heroic. In Japan, robots are a part of society, whether as caregivers, teammates, or even hotel staff. But this harmony, while lovely, also comes with a warning: over-attachment to robots can erode human-to-human connections. The risk isn’t just replacing human interaction – it’s forgetting what it means to connect meaningfully with one another.

Beyond national characteristics, there is Buddhism. Robots don’t possess human consciousness, but perhaps they embody something more profound: equanimity. In Buddhism, equanimity is one of the most sublime virtues, describing a mind that is “abundant, exalted, immeasurable, without hostility, and without ill will.”

The stuck Roomba we met earlier might not be abundant and exalted, but it is without hostility or ill will. It is unaffected by the chaos of the human world around it. Equanimity isn’t about detachment – it’s about staying steady when circumstances are chaotic. Robots don’t get upset when stuck under a sofa or having to change a diaper.

But what about us? If we treat robots carelessly, kicking them if they malfunction or shouting at them when they get something wrong, we’re not degrading them – we’re degrading ourselves. Equanimity isn’t just about how we respond to the world. It’s about what those responses say about us.

Equanimity, then, offers a final lesson: robots are not just tools – they’re reflections of ourselves, and our society. So, how should we treat robots in Western culture? Should they have rights?

It may seem unlikely now. But in the early 19th century it was unthinkable that slaves could have rights. Yet in 1865, the 13th Amendment to the US Constitution abolished slavery in the United States, marking a pivotal moment for human rights. Children’s rights emerged in the early 20th century, formalised with the Declaration of the Rights of the Child in 1924. And women gained the right to vote in many Western countries around 1920.

In the second half of the 20th century, legal protections were extended to non-human entities. The United States passed the Animal Welfare Act in 1966, Switzerland recognised animals as sentient beings in 1992, and Germany added animal rights to its constitution in 2002. In 2017, New Zealand granted legal personhood to the Whanganui River, and India extended similar rights to the Ganges and Yamuna Rivers.

That same year, Personal Delivery Devices were given pedestrian rights in Virginia and Sophia, a humanoid robot developed by Hanson Robotics, controversially received Saudi Arabian citizenship – though this move was widely criticised as symbolic rather than practical.

But, ultimately, this isn’t just about rights. It’s about how our treatment of robots reflects our humanity – and how it might shape it in return. Be kind.


About the Author:

HennyGe Wichers is a science writer and technology commentator. For her PhD, she researched misinformation in social networks. She now writes more broadly about artificial intelligence and its social impacts.

AI and Criminal Justice: How AI Can Support – Not Undermine – Justice https://swisscognitive.ch/2024/11/29/ai-and-criminal-justice-how-ai-can-support-not-undermine-justice/ Fri, 29 Nov 2024 04:44:00 +0000 AI adoption in criminal justice brings opportunities for efficiency and public safety but requires ethical safeguards.

AI adoption in criminal justice brings opportunities for efficiency and public safety but requires ethical safeguards to prevent risks of bias, misuse, and erosion of trust.

 

Copyright: theconversation.com – “AI and Criminal Justice: How AI Can Support – Not Undermine – Justice”


 

Interpol Secretary General Jürgen Stock recently warned that artificial intelligence (AI) is facilitating crime on an “industrial scale” using deepfakes, voice simulation and phony documents.

Police around the world are also turning to AI tools such as facial recognition, automated licence plate readers, gunshot detection systems, social media analysis and even police robots. AI use by lawyers is similarly “skyrocketing” as judges adopt new guidelines for using AI.

While AI promises to transform criminal justice by increasing operational efficiency and improving public safety, it also comes with risks related to privacy, accountability, fairness and human rights.

Concerns about AI bias and discrimination are well documented. Without safeguards, AI risks undermining the very principles of truth, fairness, and accountability that our justice system depends on.

In a recent report from the University of British Columbia’s School of Law, Artificial Intelligence & Criminal Justice: A Primer, we highlighted the myriad ways AI is already impacting people in the criminal justice system. Here are a few examples that reveal the significance of this evolving phenomenon.

The promises and perils of police using AI

In 2020, an investigation by The New York Times exposed the sweeping reach of Clearview AI, an American company that had built a facial recognition database using more than three billion images scraped from the internet, including social media, without users’ consent.

Policing agencies worldwide that used the program, including several in Canada, faced public backlash. Regulators in multiple countries found the company had violated privacy laws. It was asked to cease operations in Canada.

Clearview AI continues to operate, citing success stories of helping to exonerate a wrongfully convicted person by identifying a witness at a crime scene; identifying someone who exploited a child, which led to their rescue; and even detecting potential Russian soldiers seeking to infiltrate Ukrainian checkpoints.[…]

Read more: www.theconversation.com

Leveraging AI and Blockchain for Privacy and Security in Cross-Border Data Transfers https://swisscognitive.ch/2024/11/19/leveraging-ai-and-blockchain-for-privacy-and-security-in-cross-border-data-transfers/ Tue, 19 Nov 2024 04:44:00 +0000 AI and blockchain enhance privacy and security in cross-border data transfers through automation, encryption, and transparent compliance.

With an eye toward privacy and regulatory issues, this article investigates the difficulties that cross-border data flows pose for multinational corporations. It highlights how emerging technologies such as blockchain and artificial intelligence (AI) can improve data security, automate compliance, and guarantee transparency, providing a strong basis for protecting private data worldwide.

 

SwissCognitive Guest Blogger: Vishal Kumar Sharma – “Leveraging AI and Blockchain for Privacy and Security in Cross-Border Data Transfers”


 

Today’s globalized world depends on data flowing across borders for international companies to operate effectively. Yet as organizations rely more and more on overseas operations, they face great difficulty controlling the privacy and security of data that crosses borders. Differing privacy rules, legal systems, and security requirements between countries create complexity, making cross-border data transfers a major issue for companies trying to stay compliant while keeping business operations running smoothly.

The Growing Concern of Cross-Border Data Transfers

Cross-border data transfers are fraught with legal and operational challenges. Data privacy regulations vary significantly from country to country, leading to uncertainty about compliance and accountability. Regulations such as the European Union’s General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and China’s Data Security Law have stringent guidelines for the protection of personal data and restrict the transfer of sensitive information outside their jurisdictions.

Data breaches are one of the main worries in cross-border data exchanges. Data moving across borders may pass through several jurisdictions, increasing the possibility of illegal access or misuse. Companies have to make sure adequate security systems are in place to guard this information against cyberattacks, espionage, and data theft.

Compliance with local rules is another important problem, since these rules often place severe restrictions on how personal data may be exchanged or used internationally. Ignoring them can lead to large fines, reputational damage, and lost client confidence. Moreover, the variations between privacy regimes can lead to operational inefficiencies, since companies must apply multiple data-security solutions to satisfy different local requirements.

AI for Enhanced Data Privacy in Cross-Border Transfers

By automating and optimizing privacy protections, artificial intelligence (AI) can transform the management and security of cross-border data transfers. Here are some of the main ways AI might improve data privacy:

  1. Automated Data Classification and Encryption: AI systems can automatically identify sensitive data according to pre-defined criteria and apply suitable encryption before it is exported internationally. By classifying data into different sensitivity levels, AI helps guarantee that the most critical data receives the strongest protection, reducing the possibility of exposure in transit or in storage.
  2. Data Anonymization and Pseudonymization: AI-driven systems can anonymize personal data before it leaves a country’s borders, transforming sensitive information into pseudonymous or anonymized data sets that are far harder to trace back to individuals. This minimizes privacy risks, especially when handling health, financial, or personally identifiable information (PII). (A minimal sketch of points 1 and 2 follows this list.)
  3. Real-time Threat Detection and Response: AI can monitor data transfers in real time and identify irregularities or threats while data is in motion. By analysing network traffic patterns and flagging risks, machine learning models help companies react quickly to new hazards and stop data breaches before they materialize.
  4. Compliance Monitoring: AI can help companies monitor and preserve compliance with the many worldwide data protection regulations. By continuously scanning for regulatory changes and automatically adjusting data-handling processes, AI helps ensure that cross-border data transfers meet the necessary legal criteria, greatly reducing both the workload of compliance teams and the risk of non-compliance.
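
As a concrete illustration of points 1 and 2, the sketch below tags each field of a record with a sensitivity level and pseudonymises the identifying fields with a keyed hash before the record leaves its region of origin. The field names, the classification rules, and the key handling are assumptions made for the example; a production system would use a vetted classification model or service and a proper key-management system.

```python
# Minimal sketch of rule-based sensitivity tagging plus keyed-hash pseudonymisation
# before a cross-border transfer. Field names and rules are illustrative assumptions.
import hashlib
import hmac
import json

SENSITIVITY = {                 # toy classification policy
    "name": "PII",
    "email": "PII",
    "diagnosis": "special-category",
    "purchase_total": "non-sensitive",
}
SECRET_KEY = b"replace-with-a-key-from-a-key-management-service"

def pseudonymise(value: str) -> str:
    """Deterministic keyed hash: records stay joinable without exposing identities."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def prepare_for_transfer(record: dict) -> dict:
    out = {}
    for field, value in record.items():
        level = SENSITIVITY.get(field, "unclassified")
        if level in ("PII", "special-category"):
            out[field] = pseudonymise(str(value))   # protect identifying fields
        else:
            out[field] = value                      # pass non-sensitive data through
    return out

record = {"name": "Jane Doe", "email": "jane@example.com",
          "diagnosis": "asthma", "purchase_total": 42.50}
print(json.dumps(prepare_for_transfer(record), indent=2))
```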

Blockchain for Secure and Transparent Data Transfers

With its decentralized and immutable character, blockchain technology offers a strong basis for improving security and privacy in international data exchanges. Its contributions include the following:

  1. Decentralized Data Ownership: In cross-border data exchanges, it can be difficult to establish unambiguous ownership of data as it passes through several countries. Blockchain enables distributed control, letting people and companies keep ownership of their data even while it is shared across borders. Every transaction or data movement is noted on a distributed ledger, guaranteeing full traceability and transparency.
  2. Immutable Audit Trails: Blockchain generates an unchangeable audit record of all data transactions, so any cross-border data movement can be traced back to its source. This is especially helpful in satisfying legal requirements for accountability and documentation: by presenting an unchangeable record of data transfers, companies can demonstrate compliance and head off legal conflicts and regulatory fines. (A minimal sketch of this idea, together with the policy checks of point 3, follows this list.)
  3. Smart Contracts for Automated Compliance: Smart contracts built on blockchain systems can enforce compliance with data privacy rules across borders. These contracts can contain clauses guaranteeing that data is handled in line with pre-defined policies and transmitted only to countries with sufficient privacy protections. Should a destination fall short of the required privacy criteria, the smart contract can block the transfer, guaranteeing adherence to the relevant legal framework.
  4. Enhanced Encryption and Data Access Control: Blockchain allows encrypted, peer-to-peer data exchanges, improving security through access control and encryption. With blockchain, companies can regulate access so that only authorised users may read or change private information while it travels across borders. Moreover, the encryption used by blockchain systems makes it very difficult for unauthorised actors to access or manipulate the data.
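
Points 2 and 3 can be illustrated without a full blockchain stack: the sketch below chains each transfer record to the hash of the previous one, so later tampering is detectable, and refuses to record transfers to destinations outside a whitelist, mimicking a smart contract’s policy gate. The whitelist, the record fields and the single-node design are illustrative assumptions; a real deployment would run on an actual distributed ledger and on formal adequacy decisions.

```python
# Toy hash-chained transfer ledger with a smart-contract-like adequacy check.
# The destination whitelist and record fields are illustrative assumptions only.
import hashlib
import json
import time

ADEQUATE_DESTINATIONS = {"EU", "CH", "CA"}   # toy stand-in for adequacy decisions

class TransferLedger:
    def __init__(self):
        self.entries = []

    def _hash(self, body: dict) -> str:
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def record_transfer(self, dataset_id: str, origin: str, destination: str) -> dict:
        # Policy gate: refuse transfers to destinations without adequate protection.
        if destination not in ADEQUATE_DESTINATIONS:
            raise ValueError(f"Transfer to {destination} blocked: no adequacy decision")
        body = {
            "dataset_id": dataset_id, "origin": origin, "destination": destination,
            "timestamp": time.time(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else None,
        }
        entry = {**body, "hash": self._hash(body)}  # chain this entry to the last one
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; editing any earlier entry breaks verification."""
        prev = None
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev or self._hash(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

ledger = TransferLedger()
ledger.record_transfer("customer-db-snapshot-01", "EU", "CH")
print("Ledger intact:", ledger.verify())
```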

The Synergy of AI and Blockchain in Data Privacy

Even further privacy and security advantages can come from using AI and blockchain together in cross-border data exchanges. While blockchain guarantees safe, open, and auditable data transfers, artificial intelligence may offer intelligent data classification, real-time threat detection, and automatic compliance monitoring.

AI can monitor cross-border transactions and flag potential threats or compliance issues, while blockchain records every transaction immutably, providing a reliable log for auditing and legal purposes. Taken together, these technologies can form a robust framework for secure and compliant data transfers even in complex international settings; a small combined sketch follows.
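
As a rough sketch of that synergy, and only as an assumption about how such a pipeline might look, a simple statistical anomaly check can play the role of the AI monitor, gating which transfers are allowed before they are written to a tamper-evident log like the one sketched above. The history, sizes, and threshold below are made-up illustrative values.

```python
import statistics


def is_anomalous(transfer_mb: float, history_mb: list, z_threshold: float = 3.0) -> bool:
    """Flag a transfer whose size deviates strongly from recent history."""
    if len(history_mb) < 2:
        return False  # not enough history to judge
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb)
    if stdev == 0:
        return transfer_mb != mean
    return abs(transfer_mb - mean) / stdev > z_threshold


# Recent transfer sizes in MB (illustrative data).
history = [120.0, 131.5, 118.2, 125.9, 122.4]
for size in (127.0, 950.0):
    verdict = "held for review" if is_anomalous(size, history) else "allowed"
    print(f"{size:7.1f} MB -> {verdict}")
```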

Conclusion

Cross-border data transfers are essential to international businesses, but they carry significant privacy and security risks. Through automated data protection, secure transfer mechanisms, and built-in regulatory compliance, artificial intelligence (AI) and blockchain offer powerful tools to mitigate those risks. Adopting these technologies can help companies navigate the complexity of cross-border data transfers with greater confidence, keeping sensitive data protected while enabling seamless global operations.

As the global regulatory landscape evolves, organizations that want to stay ahead of the curve and safeguard their most valuable asset, their data, will depend critically on integrating AI and blockchain into their data privacy strategies.


About the Author:

Vishal Kumar Sharma is a Senior Project Engineer at the AI Research Centre, Woxsen University, India, with over eight years of experience in team management, PCB design, programming, robotics manufacturing, and project management. He has contributed to multiple patents and is passionate about merging smart work with hard work to drive innovation in AI and robotics.

AI Pioneers Claim Nobel Prizes: Transforming the Future of Science https://swisscognitive.ch/2024/11/12/ai-pioneers-claim-nobel-prizes-transforming-the-future-of-science/ Tue, 12 Nov 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126665 AI pioneers winning Nobel Prizes highlights the merging of AI with physics and chemistry, pointing to a unified future in science.

The recent Nobel Prizes awarded to AI pioneers showcase the merging of artificial intelligence with physics and chemistry, indicating a shift toward a unified scientific future.

 

SwissCognitive Guest Blogger: Utpal Chakraborty, Chief Digital Officer, Allied Digital Services Ltd., AI & Quantum Scientist – “AI Pioneers Claim Nobel Prizes: Transforming the Future of Science”


 

The year 2024 will be remembered for generations, marking a historic milestone as artificial intelligence researchers make unprecedented strides in multiple Nobel Prize categories. For the first time, AI pioneers were recognized not solely for advancements in AI itself but for groundbreaking contributions to physics and chemistry. This achievement highlights how the lines between traditional sciences and computer science (specifically AI) are blurring in ways that would have seemed unimaginable just a few decades ago.

The announcement sent ripples through the scientific community when Geoffrey Hinton and John Hopfield shared the Nobel Prize in Physics, while Demis Hassabis, along with two other scientists, claimed the Chemistry prize. Three brilliant minds, known primarily for their AI work, are now recognized for transforming our understanding of the physical world.

Geoffrey Hinton and John Hopfield received the Physics Nobel for their work on understanding phase transitions in complex systems through the lens of Neural Computation. Their groundbreaking discovery showed how the mathematics of phase transitions in materials shares fundamental principles with how Neural Networks learn and process information.

Hopfield’s contribution stemmed from his revolutionary 1982 paper (Neural networks and physical systems with emergent collective computational abilities) introducing the Hopfield network, a mathematical model that showed how collections of simple units could exhibit complex behavior similar to phase transitions in physics. The model demonstrated how memory could emerge from the collective behavior of simple components, much like how magnetic properties emerge in materials.
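
For readers who want the mathematics behind that description, the standard Hopfield formulation (a textbook summary, not a quotation from this article) assigns the network an energy that asynchronous updates never increase, so stored patterns sit in local minima:

```latex
E(\mathbf{s}) = -\tfrac{1}{2} \sum_{i \neq j} w_{ij}\, s_i s_j ,
\qquad
w_{ij} = \frac{1}{N} \sum_{\mu=1}^{p} \xi_i^{\mu} \xi_j^{\mu},
\qquad
s_i \leftarrow \operatorname{sgn}\!\Big(\sum_{j} w_{ij}\, s_j\Big)
```

Here the unit states s_i take values of plus or minus one, the Hebbian weights w_ij store p patterns, and recalling a memory means relaxing to the nearest energy minimum, which is the sense in which memory emerges from collective behavior.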

Hinton’s work complemented this by revealing how the principles of statistical mechanics, traditionally used to understand particle behavior in physics, could explain deep learning’s success. His breakthrough came from showing that the way neural networks optimize their weights (Backpropagation) follows the same mathematical principles that govern how physical systems find their lowest energy states.
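
The analogy the author describes can be written compactly; this is a standard textbook pairing rather than the specific result cited by the Nobel committee. Gradient-based training moves the weights downhill on a loss surface much as a dissipative system relaxes toward low energy, and energy-based models such as the Boltzmann machine make the link explicit through the Boltzmann distribution:

```latex
\theta_{t+1} = \theta_t - \eta\, \nabla_{\theta} L(\theta_t),
\qquad
p(\mathbf{s}) = \frac{e^{-E(\mathbf{s})/T}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')/T}}
```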

Of course, many of us know these scientists primarily for their AI contributions:

– Hopfield’s neural networks revolutionized our understanding of associative memory and laid the groundwork for modern deep learning.

– Hinton’s work on backpropagation and deep belief networks essentially created the deep learning revolution we’re experiencing today.

But it’s their ability to bridge these seemingly disparate fields that makes their Physics Nobel Prize so significant. As Hinton once said at a conference, “The brain is a physical system. Why shouldn’t its principles help us understand other physical systems?”

On the other hand, Demis Hassabis’s Chemistry Nobel came for something equally remarkable – using AI principles to solve one of chemistry’s grand challenges, protein folding. His work at DeepMind led to AlphaFold2, but the Nobel recognized his deeper insights into how the principles of reinforcement learning could reveal fundamental rules governing molecular interactions.

The prize specifically acknowledged his team’s discovery of new chemical principles through AI analysis, principles that classical scientists had missed. By training AI systems to understand molecular behavior, they uncovered previously unknown patterns in how proteins fold and interact, revolutionizing our understanding of chemical processes at the molecular level.

Most know Hassabis as the founder of DeepMind and the mind behind AlphaGo, but his journey from AI to chemistry illustrates a broader trend in science. His background in neuroscience and computer games gave him a unique perspective on how complex systems organize themselves, whether they are neural networks, game strategies, or molecular structures.

What makes these Nobel Prizes so fascinating is how they highlight the convergence of different scientific disciplines.

The work of Hinton, Hopfield, and Hassabis shows us that these aren’t separate fields anymore; they are different lenses for viewing the same reality. Their discoveries reveal a deeper unity in science that we are only beginning to appreciate.

As I write this article, I can’t help but feel we are living through a new scientific revolution. The tools of AI aren’t just helping us do traditional science faster; they are fundamentally changing how we think about science itself.

Young researchers today don’t see themselves as just physicists, chemists, or computer scientists. They are explorers in a unified landscape where:

– Physical laws inform neural network design.

– Chemical principles inspire new computing architectures.

– AI algorithms reveal new patterns in nature.

What strikes me most about these Nobel laureates is their humanity. Despite working with machines and mathematical abstractions, they never lost sight of the human element in science.

As someone who has worked in these intersecting fields, I see these Nobel Prizes as more than just recognition of brilliant work. They are a signal that the future of science lies not in specialization, but in synthesis. The next generation of scientists won’t just cross boundaries – they’ll erase them.

These Nobel Prizes aren’t just awards; they are a glimpse of science’s future. A future where the boundaries between classical physics, quantum physics, chemistry, and computation disappear and where artificial intelligence helps us see the unity that was always there.

AI Search Could Break the Web https://swisscognitive.ch/2024/11/05/ai-search-could-break-the-web/ Tue, 05 Nov 2024 04:44:00 +0000 https://swisscognitive.ch/?p=126593 AI search tools risk reducing web traffic for creators, highlighting the need for fair compensation systems for online content creation.

AI search tools may disrupt the digital economy by limiting creators’ exposure, showing the need for fair reward systems to support diverse content creation online.

 

Copyright: technologyreview.com – “AI Search Could Break the Web”


 

In late October, News Corp filed a lawsuit against Perplexity AI, a popular AI search engine. At first glance, this might seem unremarkable. After all, the lawsuit joins more than two dozen similar cases seeking credit, consent, or compensation for the use of data by AI developers. Yet this particular dispute is different, and it might be the most consequential of them all.

At stake is the future of AI search—that is, chatbots that summarize information from across the web. If their growing popularity is any indication, these AI “answer engines” could replace traditional search engines as our default gateway to the internet. While ordinary AI chatbots can reproduce—often unreliably—information learned through training, AI search tools like Perplexity, Google’s Gemini, or OpenAI’s now-public SearchGPT aim to retrieve and repackage information from third-party websites. They return a short digest to users along with links to a handful of sources, ranging from research papers to Wikipedia articles and YouTube transcripts. The AI system does the reading and writing, but the information comes from outside.

At its best, AI search can better infer a user’s intent, amplify quality content, and synthesize information from diverse sources. But if AI search becomes our primary portal to the web, it threatens to disrupt an already precarious digital economy. Today, the production of content online depends on a fragile set of incentives tied to virtual foot traffic: ads, subscriptions, donations, sales, or brand exposure. By shielding the web behind an all-knowing chatbot, AI search could deprive creators of the visits and “eyeballs” they need to survive.

If AI searches break up this ecosystem, existing law is unlikely to help. Governments already believe that content is falling through cracks in the legal system, and they are learning to regulate the flow of value across the web in other ways. The AI industry should use this narrow window of opportunity to build a smarter content marketplace before governments fall back on interventions that are ineffective, benefit only a select few, or hamper the free flow of ideas across the web.[…]

Read more: www.technologyreview.com
