Generative AI is amazing, right? Ask it for tips on writing the perfect cover letter, and in seconds you’ll have advice you can actually use. Talk about convenience. But as they say, “Too much of a good thing can be a bad thing.”
Generative AI can be safe to use, as long as you use it wisely. The three main privacy and cybersecurity risks are data privacy issues, bias, and cybersecurity threats.
According to a survey from the Pew Research Center, 55% of Americans regularly use AI. Plus, over 70% of businesses are either using or exploring the use of AI. These numbers are not surprising because AI is undeniably beneficial.
The Benefits of Generative AI
ChatGPT, Gemini, Claude, and Meta AI are just some of the top generative AI tools. They have transformed the way we create, learn, and work. For businesses, AI helps automate tasks, boosting overall productivity.
For some artists, generative AI acts as creative support. It helps them visualize concepts, brainstorm ideas, and fine-tune designs.
For students, AI can help with creating study plans, generating practice problems, and exploring different learning styles. Plus, some generative AI tools have a deep research feature, which can assist students with research.
And for general consumers, generative AI can offer personalized shopping recommendations, budgeting tips, curated entertainment content, and even customized fitness plans.
According to studies, generative AI can also help users create strong, unique passwords, reducing the chances of unauthorized access. However, we don’t recommend using AI chatbots to generate your passwords; a password manager is a better and more secure way to do it.
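If you want to see what “strong and unique” looks like in practice, here’s a minimal sketch in Python using the standard-library `secrets` module, which generates a password entirely on your own machine so nothing is ever sent to a chatbot. The length and character set here are arbitrary choices for illustration, not requirements.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets draws from the OS's cryptographically secure randomness,
    # unlike the predictable `random` module.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # a different 20-character password each run
```

Generating locally like this is essentially what a good password manager does under the hood, without any of your choices being logged by a third party.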
Overall, it’s not an exaggeration to say that AI has become an invaluable tool in our lives. But it also exposes us to new privacy and security risks.
3 Privacy and Cybersecurity Risks of Generative AI
1) Data Privacy Issues
Every time you interact with generative AI, from chatting to generating images, your data is collected. From the moment you create an account, you’re already handing over information, and whatever you feed the AI, the developer can collect.
While data collection helps developers improve their AI tools, it poses a risk to data privacy. This is especially true since many people tend to overshare without realizing it. For example, some people ask generative AI about their medical symptoms, which can be very risky: if the AI developer is ever breached, bad actors could gain access to that health information.
In theory, the information you share with generative AI could also be used for targeted advertising and profiling. For example, if you asked ChatGPT about the best fried chicken in your area, the developer could share that inquiry with third parties, increasing the chances that you’ll see more ads from big fast-food chains.
What’s worse is that even if you don’t use AI, developers can still collect your data elsewhere. For instance, according to IBM, developers often rely on web scraping, collecting user data from across websites and social media platforms.
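To give you a sense of how simple scraping is, here’s a minimal sketch using Python’s third-party `requests` and `beautifulsoup4` libraries. The URL is just a placeholder; real training-data scrapers crawl millions of pages, but the basic mechanic is the same.

```python
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

# Placeholder URL for illustration; a real scraper would crawl many pages.
url = "https://example.com"
response = requests.get(url, timeout=10)
response.raise_for_status()

# Parse the HTML and pull out every paragraph of visible text.
soup = BeautifulSoup(response.text, "html.parser")
for paragraph in soup.find_all("p"):
    print(paragraph.get_text(strip=True))
```

If text like this ends up in a training set, anything you’ve posted publicly, from forum answers to social media bios, can come along with it.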
2) Bias
Another risk is bias. Generative AI tools learn from data and are built by humans, who naturally carry biases that can show up in the AI’s outputs. For example, when you ask for career ideas based on your personality, the AI could unintentionally give you responses shaped by biased hiring data.
Image generation offers another example of AI bias. Sometimes, when you ask AI to generate images, it falls back on defaults baked into its training data. Ask for photos of people in different professions, for instance, and you may get photos of white men in different uniforms, reinforcing gender stereotypes.
3) Cybersecurity Threats
Lastly, generative AI models face cybersecurity threats. The companies that collect your data are prime targets for hackers, who can sell that data on the dark web. Some of the AI companies that have been hacked are:
- OpenAI
- Anthropic
- Clearview AI
- TaskRabbit
Aside from data breaches, bad actors are also using generative AI to power up their cyberattacks. For example, scammers can use chatbots to write grammatically flawless phishing emails, making them harder to detect. As per CSO, cybercriminals are now using AI to create fake CAPTCHA pages that mimic legitimate verification pages, slipping past security filters and capturing user information. Plus, they use image generators to create highly convincing fake content to deceive victims.
What Users Can Do
Now, what can users do? Does this mean you have to stop using generative AI? Quitting altogether is certainly the most effective way to protect your privacy, but if AI is already part of your studies or work, there are still things you can do.
- Pick the Right Tools: Make sure the AI tool you use is privacy-friendly. It should be transparent about its data practices and have a good track record on data security.
- Use Strong Security Measures: While there’s not much you can do to prevent a company from being breached, you can still protect your own accounts. Use strong, unique passwords, enable two-factor authentication, and rely on a password manager.
- Balance Convenience with Caution: Make it a habit to use AI only for repetitive tasks like summarizing long-form content and basic research. Avoid using chatbots for sensitive matters, like your health, finances, and anything explicit.
- Opt Out of Data Collection: Check whether the AI tool you’re using offers an option to opt out of data collection. If it does, opt out right away to protect your privacy while using the tool.
- Stay Informed: Lastly, stay up to date on cybersecurity and AI developments. This will help you know what to watch out for and take preventative measures.
Conclusion
Overall, it’s clear that generative AI is a key innovation for society, but it also raises real challenges, particularly around privacy and cybersecurity.
The takeaway is that you should use AI responsibly. Don’t let it erode your skills or your privacy.
Frequently Asked Questions
What are the limitations or challenges of AI in cybersecurity?
One of the key limitations of AI in cybersecurity is bias. Developers train artificial intelligence on data and assumptions, and those biases carry over into security tools. This can lead to false negatives, where real threats go unflagged, and false positives, where legitimate activity is flagged as suspicious.
What is the 30% rule in AI?
The 30% rule in AI is the idea of letting AI handle about 70% of the repetitive work while humans handle the remaining 30%. The rule is meant to ensure that artificial intelligence doesn’t replace human creativity, skill, and judgment.