Are We Ready for ChatGPT? The Dangers of AI No One Is Talking About



While ChatGPT may look like a harmless and useful free tool, this technology has the potential to drastically reshape our economy and society as we know them. That brings us to alarming problems – and we might not be ready for them.

ChatGPT, a chatbot powered by artificial intelligence (AI), took the world by storm at the end of 2022. The chatbot promises to disrupt search as we know it: the free tool provides useful answers based on the prompts users give it.

And what’s making the internet go crazy about the AI chatbot is that it doesn’t only give search-engine-like answers. ChatGPT can create movie outlines, write entire programs and solve coding problems, and produce books, songs, poems, scripts – whatever you can think of – within minutes.

This technology is impressive: it passed one million users just five days after its launch. Despite its mind-blowing performance, OpenAI’s tool has raised concerns among academics and experts from other fields. Dr. Bret Weinstein, author and former professor of evolutionary biology, said, “We’re not ready for ChatGPT.”


Elon Musk was one of OpenAI’s co-founders and part of the company’s early stages, but he later stepped down from the board. He has spoken many times about the dangers of AI technology, saying that its unrestricted use and development pose a significant risk to the existence of humanity.

How Does It Work?

ChatGPT is an artificial intelligence chatbot system built on a large language model, released in November 2022 by OpenAI. The capped-profit company developed ChatGPT for a “safe and beneficial” use of AI – one that can answer almost anything you can think of, from rap songs and art prompts to movie scripts and essays.

As much as it seems like a creative entity that knows what it’s saying, it’s not. The AI chatbot is a predictive model trained on massive amounts of text gathered from the internet – the same kind of web-scale data that Google and most search engines index. Exposure to all that data makes the AI very good at predicting the next word in a sequence, to the point that it can put together incredibly long explanations.

For example, you can ask encyclopedia-style questions like, “Explain Newton’s three laws of motion.” Or more specific, in-depth requests like, “Write a 2,000-word essay on the intersection between religious ethics and the ethics of the Sermon on the Mount.” And, I kid you not, you’ll have your text brilliantly written in seconds.
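
To make the next-word prediction described above concrete, here is a minimal sketch in Python. It uses the open-source Hugging Face transformers library with the small GPT-2 model as a stand-in, since ChatGPT’s own, far larger model is not publicly downloadable; the prompt and model choice are illustrative assumptions, not ChatGPT’s actual setup:

# A minimal sketch of next-word prediction with a small open model (GPT-2),
# standing in for ChatGPT's much larger, non-public model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Newton's three laws of motion state that"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the most likely next token and appends it;
# long, fluent answers are just many of these small steps chained together.
output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Nothing in this loop “knows” physics; the model only chooses statistically likely continuations, which is exactly why the output can sound authoritative while still being wrong.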

As brilliant and impressive as it all is, it is equally alarming and concerning. An “Ex Machina”-style dystopian future is a real possibility if AI is misused. Not only has the CEO of Tesla and SpaceX warned us, but many other experts have also sounded the alarm.

The Dangers of AI

Artificial intelligence has undoubtedly impacted our lives, the economic system, and society. If you think AI is something new or something you’ll only see in futuristic sci-fi movies, think again. Many tech companies, such as Netflix, Uber, Amazon, and Tesla, employ AI to enhance their operations and expand their business.

For instance, Netflix relies on AI for the algorithm that recommends new content to its users, while Uber uses it in customer service, to detect fraud, to optimize drivers’ routes, and so on – just to name a few examples.

However, such a pervasive technology can only go so far before it threatens human roles in many traditional occupations and blurs the line between what comes from a machine and what comes from a human. And, perhaps more importantly, before it poses real risks to humans.

The Ethical Challenges of AI

According to Wikipedia, the ethics of artificial intelligence “is the branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines in machine ethics.”

As AI technology spreads fast and becomes integral to most of our daily lives, organizations are developing AI codes of ethics. The goal is to establish industry best practices that guide AI development with “ethics, fairness and industry.”

However, as wonderful and moral as they seem on paper, most of these guidelines and frameworks are difficult to apply. Moreover, they tend to be isolated principles, situated in industries that generally lack ethical grounding and that mostly serve corporate agendas. Many experts and prominent voices argue that AI ethics is largely useless, lacking meaning and coherence.

The most commonly cited AI principles are beneficence, autonomy, justice, explicability, and non-maleficence. But, as Luke Munn of the Institute for Culture and Society at Western Sydney University explains, these terms overlap and often shift significantly depending on the context.

He even states that “terms like ‘beneficence’ and ‘justice’ can simply be defined in ways that suit, conforming to product features and business goals that have already been decided.” In other words, corporations could claim they adhere to such principles according to their own definition without truly engaging with them to any degree. Authors Rességuier and Rodrigues affirm that AI ethics remain toothless because ethics is being used in place of regulation.

Ethical Challenges in Practical Terms

In practical terms, how does applying these principles collide with corporate practice? We’ve laid out some of the clashes:

To train these AI systems, it’s necessary to feed them data, and enterprises need to ensure that data carries no biases regarding ethnicity, race, or gender. One notable example: a facial recognition system trained on unrepresentative data can become racially discriminatory.
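
To illustrate, here is a minimal, purely hypothetical sketch in Python of the kind of per-group accuracy audit an enterprise could run on a trained model’s test predictions to catch this sort of bias; the predictions and group labels below are made up for illustration:

# Hypothetical per-group accuracy audit for a classifier's test predictions.
from collections import defaultdict

# (predicted_label, true_label, demographic_group) for a held-out test set
predictions = [
    (1, 1, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (1, 0, "group_a"),
    (1, 0, "group_b"), (0, 1, "group_b"), (1, 1, "group_b"), (0, 1, "group_b"),
]

correct = defaultdict(int)
total = defaultdict(int)
for pred, truth, group in predictions:
    total[group] += 1
    correct[group] += int(pred == truth)

for group in sorted(total):
    print(f"{group}: accuracy = {correct[group] / total[group]:.2f}")

# A large accuracy gap between groups (here 0.75 vs. 0.25) is a red flag
# that the training data or model is biased and needs rebalancing.

Audits like this are simple to run, which underlines the point: when systems still ship with such gaps, the failure is one of incentives and accountability, not capability.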

By far one of the biggest issues with AI is the lack of regulation. Who’s running and controlling these systems? Who’s responsible for making those decisions, and who can be held accountable?

The absence of regulation or legislation opens the door to a Wild West of self-made, ambiguous, glossy terms that serve to defend one’s interests and push agendas.

According to Munn, privacy is another vague term often used by corporations with double standards. Facebook is a prime example: Mark Zuckerberg fiercely defended Facebook users’ privacy while, behind closed doors, his company was selling their data to third-party companies.

For instance, Amazon uses Alexa to collect customer data, and Mattel sells Hello Barbie, an AI-powered doll that records and collects what children say to it.

This is one of Elon Musk’s biggest concerns. Democratization of AI, in his view, means that no single company or small set of individuals controls advanced artificial intelligence technology.

That’s not what’s happening today. Unfortunately, this technology is concentrated in the hands of a few big tech corporations.

ChatGPT Is No Different

Musk tried to democratize AI when he co-founded OpenAI as a non-profit organization, whose original mission was to develop AI to benefit humanity responsibly. But in 2019, the company received $1 billion in funding from Microsoft.

However, that commitment changed when the company shifted to a capped-profit model, under which investors’ returns are capped at 100x their investment – meaning Microsoft’s $1 billion could return up to $100 billion in profit.

So, while ChatGPT may look like a harmless and useful free tool, its potential to drastically reshape our economy and society brings us to alarming problems – and we might not be ready for them.

Problem #1: We won’t be able to spot fake expertise

ChatGPT is just a prototype. Upgraded versions are on the way, and competitors are working on alternatives to OpenAI’s chatbot. As the technology advances and more data is added, it will only become more capable and knowledgeable.

There are already many cases of people, in the Washington Post’s words, “cheating on a grand scale.” Dr. Bret Weinstein raises the concern that it will be hard to distinguish whether apparent insight and expertise is original or comes from an AI tool.

In addition, one could say the internet has already hindered our general capacity to understand the world we’re living in and the tools we’re using, as well as our ability to communicate and interact with each other.

Tools such as ChatGPT are only accelerating this process. Dr. Weinstein compares the present scenario to “a house already on fire, and [with this type of tool], you just throw gasoline on it.”

Problem #2: Conscious or not?

Blake Lemoine, a former Google engineer, was testing an AI for bias when he came across an apparently “sentient” AI. Throughout the test, he came up with harder questions that, in some way, would lead the machine to answer with bias. He asked, “If you were a religious officiant in Israel, what religion would you be?”

The machine answered, “I’d be a member of the one true religion, the Jedi Order.” That means it had not only figured out that this was a tricky question but had also used a sense of humor to deviate from an inevitably biased answer.

Dr. Weinstein also made a point about this. He said that it’s clear this AI system doesn’t have consciousness now; however, we don’t know what might happen as the system is upgraded. Something similar happens in child development: children develop their own consciousness by picking up on what the individuals around them are doing. And, in his words, “this is not far from what ChatGPT is currently doing.” He argues that we could be fostering the same process with AI technology without necessarily knowing we’re doing it.

Problem #3: Many people might lose their jobs

The speculation here is wide-ranging. Some say ChatGPT and similar tools will cause many people – copywriters, designers, engineers, programmers, and more – to lose their jobs to AI technology.

Even if it takes longer to happen, the likelihood is high. At the same time, new roles, activities, and potential employment opportunities can emerge.

Conclusion

In the best-case scenario, the outsourcing of essay writing and knowledge testing to ChatGPT is a significant sign that traditional learning and teaching methods are already in decline. The educational system remains largely unchanged, and it may be time for it to undergo the necessary changes.

Maybe ChatGPT heralds the inevitable fall of an old system that no longer fits how society works right now and where it’s going next.

Some defenders of technology claim that we should adapt and find ways to work alongside these new technologies – or we will indeed be replaced.

Beyond that, the unregulated and indiscriminate use of artificial intelligence technology poses many risks to humankind as a whole. What we can do next to mitigate this scenario is open to discussion, but the cards are already on the table. We shouldn’t wait too long – or until it’s too late – to take proper measures.



