ChatGPT, a tool that uses artificial intelligence to converse with people, was recently launched. Its creation has sparked new debates about data security, transparency, and the impact it may have on people's lives.
ChatGPT uses artificial intelligence to answer questions on a wide range of topics, drawing on the data it was trained with.
According to Swiss bank UBS, ChatGPT reached 100 million monthly active users in January 2023, just two months after its launch, making it the fastest-growing app to date. For comparison, TikTok took nine months to reach 100 million active users and Instagram took two years, which shows just how quickly the tool has grown.
The new technology can benefit people's lives, bringing practicality to daily life, work, and studies. However, using the platform can also pose risks to data security.
There have already been cases of data leakage involving the tool; recently, sensitive information about users' card data was exposed.
Read on to learn more about ChatGPT, its risks, and how to protect yourself from them.
What are the risks of this technology to society?
The creators of the new technology face questions about the security of users' data and the tool's lack of transparency.
Because of this, some countries are taking preventive measures to ensure the security and privacy of their population's data.
Due to a possible breach in the use of user data, Italy has banned the use of the technology in the country. The Italian regulatory agency reported that there is no justification for the massive storage of people's data.
The chatbot's owner, OpenAI, must meet the agency's requirements to operate in the country again. So far, there is no news that the tool is back up and running in Italy.
Another point being debated is whether this technology can replace humans in the workplace, and there are different views on the topic.
Some people believe it could replace the human mind in the future, while others maintain that this is not possible.
For now, it is necessary to wait for the platform's next steps to understand the scenarios that these transformations may bring.
Besides the challenges mentioned above, this technology can become an ally for hackers. It can help attackers break into organizations' systems and deploy malicious programs such as malware.
It can also help create fake web pages, promotions, and offers designed to steal personal and sensitive data from visitors. The misuse of the technology can therefore directly impact people's lives.
Cases of ChatGPT misuse
ChatGPT can make people's lives easier, but careless use of the tool can expose people to risks.
An electronics brand experienced three leaks of sensitive information after its employees entered data into the chatbot. Although use of the tool is allowed within the organization, leaders have asked for greater attention and responsibility.
Despite the leaks, the company continued to operate normally and without financial losses. Even so, misuse of the technology can compromise an organization's reputation.
A German journalist used ChatGPT to simulate an interview with former Formula 1 driver Michael Schumacher and published the fake story in the newspaper she worked for.
The former driver's family announced their intention to take legal action against the newspaper for publishing fake news about him; the repercussions were negative and the journalist was fired.
Soon after, the newspaper published a note apologizing to his family and fans.
In another case of misuse, a lawyer used artificial intelligence to write a petition and was fined by the agency responsible for the case.
Can technology create misinformation among people?
According to Gordon Crovitz, Co-CEO of NewsGuard, ChatGPT may be the most powerful tool for spreading disinformation that has ever existed on the Internet.
The technology's knowledge base is fed with content drawn from across the digital world. Because of this, there is no guarantee that the facts presented to users have been verified beforehand, and the tool does not show the sources of its answers.
Thus, it is not possible to know whether the answers it offers can be trusted.
Can biased content be produced by ChatGPT?
There is concern that this technology could be used for harmful purposes, such as reproducing xenophobic, racist, and otherwise prejudiced content.
Publications with this type of content already exist in the digital world, but it is worth remembering that producing it may be considered a crime, and it should not be reproduced in any environment.
For everyone's well-being in the digital universe, it is essential that people recognize and report any such publications.
Phishing attacks
According to a survey conducted by BlackBerry, 48% of the people interviewed believe that a successful cyber attack using ChatGPT may occur within the next 12 months.
The survey also points out that the technology may be able to create phishing emails free of grammatical errors, making it harder for people to identify the cyber threat.
According to Thiago Marques, a cybersecurity specialist, the new version of ChatGPT is able to produce more humanized texts and process larger amounts of data, including links.
In this way, it can make phishing campaigns more efficient and allow them to be carried out with little technical knowledge, turning the technology into a possible ally of malicious actors.
This can contribute to an increase in the number of phishing attacks.
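To see why AI-written phishing is harder to catch, consider a deliberately naive heuristic filter of the kind that relies on surface signals. The keywords, scoring, and sample messages below are illustrative assumptions, not a real product's rules; a well-written, grammatically clean AI-generated email would evade checks this simple, which is exactly the concern raised above.

```python
import re

# Hypothetical signals; real filters combine many more features.
URGENCY_WORDS = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(text: str) -> int:
    """Count naive phishing signals found in an email body."""
    score = 0
    lowered = text.lower()
    # Signal 1: urgency / credential-related vocabulary.
    score += sum(1 for word in URGENCY_WORDS if word in lowered)
    # Signal 2: links that use a bare IP address instead of a domain name.
    score += len(re.findall(r"https?://\d{1,3}(?:\.\d{1,3}){3}", text))
    return score

suspicious = "URGENT: verify your password immediately at http://192.168.0.1/login"
benign = "Hi team, the meeting notes are attached. See you tomorrow."

print(phishing_score(suspicious))  # several signals detected
print(phishing_score(benign))      # no signals detected
```

A fluent, error-free message that avoids such keywords would score zero here, which is why awareness training, rather than keyword filters alone, remains essential.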
What are the benefits of using this tool?
ChatGPT works 24 hours a day, seven days a week, and it can optimize processes within organizations by delivering answers almost instantly in the corporate environment.
Used with awareness and responsibility, it can bring great benefits: improving customer service, reducing waiting times, and potentially increasing people's satisfaction with the service provided.
Within the platform it is possible to obtain information and ask questions about different areas, for example, education, health, finance, and communication.
Finally, artificial intelligence helps you manage tasks and appointments, such as scheduling meetings. What's more, this tool can be integrated across different platforms and devices, making it a versatile solution for everyone.
With the tool's new update, GPT-4, the chatbot can analyze images as well as text.
For now, the new update is only available in the paid version, but it can also be used on partner platforms that integrate the chatbot's API.
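As a rough sketch of what such an API integration involves, the snippet below assembles the JSON payload for a multimodal (text plus image) request in the style of OpenAI's chat completions API. The model name and image URL are placeholder assumptions, and actually sending the request would require an API key and the official SDK or an HTTP client.

```python
import json

def build_image_request(question: str, image_url: str) -> dict:
    """Assemble a chat-completions payload mixing text and an image."""
    return {
        "model": "gpt-4o",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_image_request(
    "What is shown in this picture?",
    "https://example.com/photo.png",  # hypothetical image location
)
print(json.dumps(payload, indent=2))
```

Partner platforms typically wrap a payload like this behind their own interface, which is why users can reach the image features without a paid ChatGPT subscription.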
With each update of the tool, it becomes more complete for its users, and new risks can arise with new versions.
Therefore, it is critical that there is training on the technology to increase people's awareness of ChatGPT, especially within organizations.
How can organizations ensure their security?
Trends indicate that the use of ChatGPT within organizations will increase, but people first need to be trained in how to use the tool.
That way, employees will be more aware of how to use it correctly and avoid exposing sensitive information to the chatbot.
In addition, it is essential that organizations have a platform to send internal communications, notices, and documents for signature to regulate the use of ChatGPT and other company matters.
This way, people will be well informed and trained about the platform.
Another important point is awareness of the cyber threats that exist in the digital world, since the chatbot can help cybercriminals carry out phishing attacks.
Because of this, people need to know how to identify and protect themselves from the attacks that may occur through it.
The PhishX ecosystem makes it possible to send internal communications to everyone within the organization and helps your security team in the process of implementing the awareness program.
PhishX helps with internal communication and awareness raising
To regulate, inform and train people, internal communications need to be delivered to everyone in the organization. PhishX can help your internal communications team with any type of release.
We are a complete ecosystem for people's awareness.
We assist your security team in the process of implementing an awareness program, through reports and data that measure the individual maturity of each person on the team against cyber risks.
In addition, you can create customized phishing campaigns and drills to track and assess your organization's risks. Our Customer Success team guides you through the entire process, helping you build a successful program.