Introduction: AI’s Emerging Dark Side

AI this, AI that. You know, there's that moment in Jurassic Park when Jeff Goldblum's character says, "Ooh, ah, that's how it always starts. But then later there's the running and screaming." Is this something we have to watch out for with AI? With what we may feel is the shining beacon of progress that's called AI?

It seems that a more sinister aspect has started creeping into the darkest corners of the criminal realms, infesting the shadows of the digital underworld.

AI in the Underworld: The New Accomplice in Crime

One could say, welcome to the world of true crime, where AI has become the ultimate accomplice, masterminding cyber heists, manipulating minds, and erasing its tracks with cold, calculated precision. The reason being that AI doesn't show any form of remorse, which bears, I suppose, a similarity to psychopathic behaviour. The renowned psychiatrist Dr. Daniel Amen once said:

We need to change the conversation around mass murder and school shootings from one about evil to one about brain health. Unless we're all able to be honest and effectively talk about how to improve brain function in those who are violent, the senseless loss of life will continue.

You see, maybe an overreliance on AI-generated insights may inadvertently absolve decision-makers from taking responsibility for their actions.

The Blame Game: Shifting Responsibility to Technology

By placing the blame on the technology, policymakers, and even healthcare professionals and educators, may actually avoid making the hard choices necessary to address the root causes of violent behaviour. This could divert resources from more effective, human-centred interventions, further undermining efforts to prevent violence.

How can we ensure that decision-makers, including the ones I mentioned earlier, are held accountable for their actions and maintain focus on addressing the root causes of violent behaviour, rather than relying solely on AI-generated insights and risking the diversion of resources from more effective, human-centred interventions?

We now have many key figures, such as Elon Musk, who have signed an open letter warning of the potential risks and saying that artificial intelligence training should actually be suspended amid fears of a threat to humanity. I think in today's subject, we should go a little more deeply into that.

Shall we start?

AI and Humanity: A Balancing Act

Now, I don't think anyone can really argue that we're not in the era of artificial intelligence, which is increasingly becoming incorporated into the decision-making process. And there is a growing concern about people becoming over-reliant on AI-generated insights, which, you know, may inadvertently absolve human decision-makers from taking responsibility for their specific actions.

You know, for example, when it comes to preventing violent behaviour. Some people think that we shouldn’t just rely on what the AI says. We should also make sure that humans are still in charge of making decisions and taking responsibility. Some people suggest that we use AI insights together with human-centred approaches to make sure we’re doing things in a responsible way.

I suppose it's all about finding a balance between the benefits of AI and the importance of human decision-making. You know, some people believe this is not now, that this is the future. But let me tell you, it's now. As we celebrate the success of the likes of AI, have we ever thought about the ethical issues that come with using artificial intelligence?

You know, it's becoming more common to use AI to prevent violent behaviour, but it's raising concerns about privacy, data biases, and even discrimination against individuals and communities.

Case Studies and Ethics in AI

I think it's important to look at case studies and real-life examples to see the unintended consequences that can come with using AI to understand and address violent behaviour.

We also need to make sure that there are guidelines in place to use AI in a responsible and ethical way when it comes to violence prevention. By doing so, we can harness the power of AI for the greater good, while also avoiding potential negative consequences. It’s important to raise awareness about those ethical concerns among policymakers, healthcare professionals, and educators, so we can make informed decisions about how to use AI.

In this specific context, I suppose the recent case of Italy banning ChatGPT for those specific reasons will make a start on decision-makers creating a policy of behaviour. And I would like to see where that actually goes. You know, how will that affect our usage of artificial intelligence? Are we just going to follow the signatories like Elon Musk, as I mentioned earlier, and say, right, let's stop any form of AI training?

We've all grown up on a diet of sci-fi. And we've seen, you know, the machines take over the world and all of that. Is this actually that dangerous for us, or what's going to happen? I suppose it's too early really to say, but we can see that AI is already being used for criminal activities.

Holistic Approach to AI

Maybe we should think about the different fields of study that can work together to prevent violent behaviour.

It’s not just one area of expertise that can solve these complex issues that we’re discussing. We need to draw on insights from psychology, neuroscience, sociology, and even AI technology to develop effective strategies. By combining diverse perspectives and methodologies, we can better understand the root cause of violence and create targeted interventions. I suppose there are challenges to interdisciplinary collaboration as well.

I suppose this could become a practical solution to overcoming some of the issues we're having with AI. The end goal is to show how bridging the gaps between disciplines and fostering collaborative efforts can create a more holistic, effective approach to understanding and preventing violent behaviour in today's society, especially where artificial intelligence is being utilised in that specific arena.

By fostering this collaboration between different fields, we can enable specific guidelines and regulations for AI use in violence prevention, ensuring responsible applications and avoiding over-reliance on technology-driven insights.

Now, some people may say, you know, well, what is violence?

You know, I use ChatGPT. I don't see anything aggressive about that. But it's not just down to the one application. You can build connectivity into the application and write your own technology that sits behind it, technology that doesn't follow the specific rules that ChatGPT actually utilises. So you're creating something entirely different, and we want to move away from that kind of creation.

So it's not used for criminality. You know, we have seen the connection between AI-generated images and AI-generated voices. One minute I can be me; the next minute I can be some famous actor selling an item, and visually you wouldn't notice any difference.

We’ve seen that with Joe Rogan selling specific health products, like a natural Viagra kind of thing.

And you look at the video and you cannot tell that it isn't actually Joe Rogan. It's quite incredible. And recently, one was created showing the arrest of Donald Trump. It really looked like Donald Trump had police around him, arresting him. It was quite shocking, in a way. So we need to come up with some level of guidance and guidelines to actually follow.

AI Policies

It's not about stopping specific individuals. And the only way we can do this is by analysing best practices, then creating policies and frameworks from these various industries, and maybe even bringing in religion if that feels like the right thing to do. I suppose this is speculation.

But we want to keep a human-centred world and not fall into the arms of AI, with everything becoming AI. In the face of AI's growing influence and its potential misuse in criminal activities, it's essential not to lose sight of the positive impacts that AI can have when combined with human-centred interventions and interdisciplinary collaboration.

By fostering a more nuanced and responsible approach to AI deployment, we can strike that balance between leveraging technology-driven insights and preserving human accountability in addressing the root causes of violent behaviour, especially as we continue to explore new frontiers in policymaking, healthcare, and education.

We have to ensure that ethical considerations, guidelines, and regulations are in place to guide AI's responsible application in preventing violence. Now, I suppose once those policies are in place, Italy would probably open the doors for AI to come back into the process. But where does that leave the training process?

You know, do we let AI keep on training and learning? Ah, I don't know. That's a very, very different subject. I suppose that could be the scary part as well.

You know, there's hope. As we forge new pathways to create a safer world by harnessing the power of AI while maintaining the human touch, by bridging gaps between disciplines, fostering collaborative efforts, and developing a comprehensive strategy, we can tap into the collective potential of AI technology and human ingenuity to tackle these challenges.

Conclusion: Charting the Future of AI and Human Collaboration

Anyway, as we embark on this journey, let us remember that the key to unlocking a brighter and more secure future lies in our ability to combine the strengths of both technology and humanity, working together to combat the depths of human depravity and create lasting positive change.

I don’t know. What are your thoughts on this subject? It’s a big subject. And one that’s going to be around for the rest of our lives.

Anyway, I think that's all we'll have time for today. Feel free to send me a message. Tell me what you think. And remember to, you know, like, follow, subscribe, and also recommend.

Have a lovely evening. Bye bye now.
