Artificial Intelligence (AI) presents enormous opportunities for innovation and progress across many sectors. At the same time, it's natural to question the risks that accompany these advancements. Are you concerned about AI-driven job displacement, or about its effects on human relationships and safety? What about crucial issues like data privacy? If you find yourself weighing these concerns alongside the excitement of AI progress, then you've come to the right place!
The article "Risks and protective measures for synthetic relationships" is an excellent illustration of the competence, thoughtfulness, and perspective required to create safe and ethical AI. Authored by a team of experts, including clare&me's Head of Psychology, Lea, it offers a comprehensive exploration of this critical topic. Since clare&me is committed to the ethical and safe development of AI, we couldn't help but share this article with you!
It briefly discusses the advancements in AI and its enormous potential to help people in a variety of fields, including management, tutoring, and therapy. However, its core focus is on the risks posed by the emerging relationships between humans and AI, known as synthetic relationships, and explores ways to mitigate these risks through policy.
“We call for an open dialogue among researchers, technology companies, civil society organizations, and policy makers that recognize new risks may emerge as synthetic relationships evolve.”
The article outlines the possibilities that artificial intelligence offers, tracing its evolution from basic assistive systems, like Siri, to applications capable of conducting entire therapy sessions. This new dynamic of AI as a responsive, personalizable tool that can take on many different tasks and genuinely bond with humans has developed remarkably in recent years.
But what comes to mind when you hear names like Siri, Alexa, Replika, or Clare? Do you have an emotional reaction? Do you think of them as people or simply AI bots?
The authors argue that the use of AI across so many areas of life “gives rise to a new form of relationship between humans and technology: synthetic relationships”. This raises important questions: do we even differentiate between synthetic and human relationships? Do we hold an AI to the same expectations we have of a human, or do we still differ in how we interact and communicate with them? Such concerns, stemming from AI's capacity to influence human thoughts, feelings, and behaviors, sometimes even more profoundly than other humans can, are at the heart of this article.
The authors do not sugarcoat the hazards that come with these novel kinds of relationships. They address a range of risks, from safety and privacy breaches to AI-driven catfishing and manipulation, loss of autonomy, and exposure to harmful content, urging readers to remain vigilant. They also explore echo chamber effects, the replacement of human professions, and spillover effects on human-human relationships, each a potential harm brought about by synthetic relationships. So if you thought human-human relationships could be complicated, synthetic ones take it to a whole new level! But don't worry - the authors suggest a plan to keep things safe and sound.
Starke et al. (2024) provide practical, human-centered policy recommendations to mitigate the risks associated with synthetic relationships, grounded in a review of existing regulations and their effectiveness. Their proposed measures include strong privacy protections such as informed disclosure, consent, secure handling of personal data, and zero-knowledge frameworks akin to HIPAA regulations. They also advocate for a clear division between sponsored and organic content, reduced switching costs between synthetic relationship providers, and external auditing to foster trust and accountability.
These recommendations are all about putting users first - protecting their independence, safety, and privacy every step of the way.
Overall, the article shows how important it is to bring scholars with interdisciplinary backgrounds to the table to address emerging phenomena such as synthetic relationships. It not only highlights the potential of AI and synthetic relationships but also critically sheds light on many of the associated risks, offering human-centered policy suggestions to minimize or mitigate them.
clare&me recognizes the challenges and risks raised in the article, such as data privacy, loss of autonomy, and echo chamber effects, and is committed to a thoughtful path in technology development, ensuring that AI is integrated in ways that prioritize user well-being and safety. If you're interested in learning more about our guiding principles or the ethical and safety standards at clare&me, please feel free to reach out.
Reference:
Starke, C., Ventura, A., Bersch, C., Cha, M., de Vreese, C., Doebler, P., ... & Köbis, N. (2024). Risks and protective measures for synthetic relationships. Nature Human Behaviour, 8(10), 1834–1836.