PwC: Decoding the Security Risks and Challenges Raised by Generative AI
Source: PwC
The impact of the wave of generative AI products on enterprise security
Generative AI products such as ChatGPT are expected to free employees from tedious, repetitive tasks, allowing them to devote more time and energy to work of greater value to the company. From an enterprise security perspective, however, generative AI products may also introduce new risks, making them a double-edged sword for enterprises.
01 The threat of cyber attacks has intensified
Generative AI lowers the barrier to entry for hackers. Attackers can use generative AI to quickly combine various cyber attack methods, conveniently "weaponize" them, and potentially devise novel attack techniques. PwC's cybersecurity team has clearly observed that social engineering attacks such as phishing emails received by clients have increased significantly in recent months, coinciding with the widespread adoption of ChatGPT; ChatGPT has also been found to be used to batch-generate phishing sites that are more deceptive.
02 Enterprise Sensitive Data Leakage
Security experts are concerned about the possible leakage of sensitive enterprise data: improper input by employees may leave sensitive data in the databases of generative AI products. OpenAI's privacy policy shows that content entered by users of ChatGPT may be used to train its AI models. ChatGPT itself has also suffered serious security incidents: due to a vulnerability in an open-source library, some users were able to see the titles of other users' conversation histories. Technology giants such as Amazon and Microsoft have already reminded employees not to share sensitive data with ChatGPT.
03 Generative AI Poisoning Risk
Training data poisoning is a common security threat to generative AI: malicious data can negatively affect the results of AI algorithms. If operational management relies heavily on generative AI, wrong decisions may be made on critical issues. Generative AI also has potential "bias" problems. In the spirit of initiatives such as the "100 Bottles of Poison for AI" exercise, in which many well-known experts and scholars participated, enterprises developing or using AI should respond to the threat of AI poisoning proactively and strategically.
04 Privacy Protection Issues
The pre-training stage of generative AI requires large-scale data collection and mining, which may include the private information of many customers and employees. If generative AI cannot properly protect and anonymize this private information, privacy leaks may result, and the information may even be abused to analyze and speculate on user behavior. For example, the mobile app market is full of image-generation software: users upload several photos of themselves, and the software generates composite photos across different scenes and themes. How these software companies use the uploaded photos, and whether this creates privacy and other security risks, deserves attention and a considered response.
05 Enterprise Security Compliance Risk
In the absence of effective management measures, mass adoption of generative AI products may lead to security compliance issues, which is a major challenge for enterprise security managers. The Interim Measures for the Administration of Generative Artificial Intelligence Services, reviewed and approved by the Cyberspace Administration of China together with six departments including the National Development and Reform Commission and the Ministry of Education, was recently announced and will come into force on August 15 [1][2]. It sets out basic requirements covering technology development and governance, service specifications, supervision and inspection, and legal responsibility, establishing a basic compliance framework for the adoption of generative AI.
Learn about real-world generative AI security threat scenarios
Having surveyed the security risks introduced by generative AI, the following sections analyze how these problems arise in more concrete threat scenarios and explore generative AI's subtler impacts on enterprise security.
01 Social Engineering Attack
World-renowned hacker Kevin Mitnick once said: "The weakest link in the security chain is the human element." Social engineering attackers typically lure corporate employees with persuasive pretexts, and the emergence of generative AI has greatly facilitated such attacks. Generative AI can produce highly realistic fake content, including fake news, fake social media posts, and fraudulent emails. Such content can mislead users, spread false information, or trick employees into making wrong decisions. Generative AI can even synthesize convincing audio or video, which can be used to commit fraud or falsify evidence. The Telecommunications Network Crime Investigation Bureau of the Baotou Municipal Public Security Bureau disclosed a telecom fraud case involving AI technology: criminals defrauded a victim of 4.3 million yuan in 10 minutes using AI face-swapping.
02 Unconscious violations by employees
Many technology vendors have begun actively entering the generative AI space, integrating generative AI features into their products and services. Employees may use these features without carefully reading the terms of use. When employees use generative AI, they may input content containing sensitive information such as financial data, project information, or company secrets, which may lead to the leakage of sensitive enterprise information. To prevent such disclosure, enterprises need comprehensive security measures: strengthening data leakage protection technology, restricting employees' online behavior, and providing security training to raise employees' vigilance about data security and confidentiality. Once an employee violation is discovered, the company should immediately assess the impact and take timely action.
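As one concrete data leakage protection control, prompts can be screened for obviously sensitive patterns before any text leaves the enterprise. The sketch below is purely illustrative (the patterns and labels are assumptions, not PwC guidance); a production DLP system would use the organization's own classifiers and dictionaries.

```python
import re

# Hypothetical patterns for illustration only; real deployments need
# locale-specific and organization-specific rules.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[- ]?\d{4}[- ]?\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    """Mask sensitive substrings before the text is sent to an external AI service."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

prompt = "Summarize the deal memo and email alice@example.com the result."
print(redact(prompt))
# → Summarize the deal memo and email [EMAIL REDACTED] the result.
```

A gateway running such a filter can log each redaction, giving security teams visibility into near-miss disclosures as well as blocking them.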
03 Inevitable Discrimination and Prejudice
The discrimination and bias that generative AI may exhibit stem mainly from the characteristics of its training data and model design. Training data drawn from the Internet reflects real-world biases, including race, gender, culture, religion, and social status. During data preparation there may not be sufficient screening and cleaning to exclude biased data. Likewise, reducing bias may receive too little attention in model design and algorithm selection. Algorithmic models pick up biases in the training data as they learn, leading to similar biases in the generated text. While eliminating bias and discrimination from generative AI is a complex challenge, there are steps businesses can take to help mitigate them [3].
04 Compromise on privacy protection
When using generative AI products, enterprises and individuals pursuing efficient automation and personalized services may make compromises on privacy protection, allowing generative AI to collect some private data. Beyond what users disclose directly during use, generative AI may also analyze the content users input and use algorithms to infer their personal information, preferences, or behaviors, further infringing on their privacy. Data desensitization and anonymization are common privacy protection measures, but they can cause some information loss and thereby reduce the accuracy of the generative model; a balance must be found between personal privacy protection and the quality of generated content. Generative AI providers should offer users a transparent privacy policy explaining how data is collected, used, and shared, so that users can make informed decisions.
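One way to strike the balance described above is pseudonymization: replacing identifiers with stable tokens so records remain linkable for analytics while the raw values stay hidden. Below is a minimal sketch under stated assumptions (the secret salt would in practice live in a key management system, and the helper name is hypothetical).

```python
import hashlib
import hmac

# Assumption: in production this key is stored in a KMS and rotated,
# never hard-coded as it is in this illustration.
SECRET_SALT = b"rotate-me-regularly"

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable pseudonym: the same input always
    maps to the same token, keeping records linkable for analysis, while
    the keyed hash prevents reading the original value back."""
    digest = hmac.new(SECRET_SALT, identifier.encode(), hashlib.sha256)
    return "user_" + digest.hexdigest()[:12]

# The same customer yields the same token across datasets...
assert pseudonymize("alice@example.com") == pseudonymize("alice@example.com")
# ...while different customers stay distinguishable.
assert pseudonymize("alice@example.com") != pseudonymize("bob@example.com")
```

Unlike irreversible deletion, this preserves the statistical utility of the data, which is exactly the quality-versus-privacy trade-off the paragraph above describes.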
05 Major Trends in Regulatory Compliance
At present, the legal compliance risks facing generative AI come mainly from "content that violates laws and regulations" and "intellectual property infringement". Without supervision, generative AI may produce illegal or inappropriate content involving insults, defamation, pornography, or violence; it may also generate content derived from existing copyrighted material, resulting in intellectual property infringement. Enterprises using generative AI must conduct compliance reviews to ensure their applications comply with relevant regulations and standards and to avoid unnecessary legal risk. Enterprises should first assess whether the products they use comply with the Interim Measures for the Administration of Generative Artificial Intelligence Services, pay close attention to updates to relevant laws and regulations, and adjust promptly to remain compliant. When using generative AI with suppliers or partners, enterprises should clarify each party's rights and responsibilities and stipulate the corresponding obligations and restrictions in the contract.
How Individuals and Businesses Can Proactively Address the Risks and Challenges of Generative AI
Individual users and corporate employees need to realize that while enjoying the various conveniences brought by generative AI, they still need to strengthen the protection of their personal privacy and other sensitive information.
01 Avoid disclosure of personal privacy
Before using generative AI products, employees should confirm that the service provider will reasonably protect users' privacy and security, carefully read the privacy policy and terms of use, and prefer reliable, publicly vetted providers. Avoid entering personal private data during use, and use virtual identities or anonymous information in scenarios that do not require real identity information. Any potentially sensitive data should be obfuscated before input. On the Internet, especially on social media and public forums, employees should avoid oversharing personal information such as names, addresses, and phone numbers, and should not casually expose such information on publicly accessible websites or in public content.
02 Avoid generating misleading content
Due to the limitations of generative AI's underlying technology, its results will inevitably contain misleading or biased content, and industry experts are still studying how to mitigate risks such as data poisoning. For important information, employees should verify it against multiple independent, trusted sources; if the same information appears in only one place, further investigation may be needed to confirm its authenticity. Check whether the claims in a result are supported by solid evidence; if there is no substantive basis, treat the information with skepticism. Identifying generative AI's misleading output and bias requires users to maintain critical thinking, continually improve their digital literacy, and learn to use these products and services safely.
Compared with individual users' openness, enterprises remain in a wait-and-see posture toward generative AI. Its introduction is both an opportunity and a challenge, so enterprises need to assess the risks holistically and make some strategic deployments in advance. PwC recommends that enterprises consider starting from the following aspects.
01 Enterprise network security assessment clarifies defense shortcomings
The number one challenge facing enterprises remains defending against the next generation of cyberattacks brought about by generative AI. It is imperative for enterprises to assess their current network security posture, determine whether they have sufficient detection and defense capabilities against these attacks, identify potential defensive vulnerabilities, and take corresponding reinforcement measures. To achieve this, the PwC cybersecurity team recommends that enterprises conduct offensive-defensive drills based on these real cyber attack threat scenarios, that is, cybersecurity "red and blue confrontation" exercises. Through different attack scenarios, enterprises can discover possible shortcomings in their network security defenses in advance and repair them comprehensively and systematically, protecting IT assets and data.
02 Deploy the internal generative AI test environment of the enterprise
To understand the technical principles of generative AI and better control the results of generative AI models, enterprises can consider establishing their own internal generative AI sandbox testing environment, guarding against the potential threats that uncontrolled generative AI products pose to enterprise data. By testing in an isolated environment, companies can ensure that accurate, bias-screened data is used for AI development, and can explore and evaluate model performance with more confidence, without risking the exposure of sensitive data. An isolated test environment also helps shield generative AI from data poisoning and other external attacks, maintaining the stability of the generative AI model.
03 Establish a risk management strategy for generative AI
Enterprises should incorporate generative AI into the scope of risk management as soon as possible, and update their risk management framework and strategy accordingly. Conduct risk assessments for business scenarios that use generative AI, identify potential risks and security vulnerabilities, formulate corresponding risk plans, and clarify countermeasures and responsibility assignments. Establish a strict access management system to ensure that only authorized personnel can access and use the generative AI products approved by the enterprise. At the same time, regulate user behavior and train employees on generative AI risk management to strengthen security awareness and response capabilities. Enterprises should also adopt a privacy-by-design approach when developing generative AI applications, so that end users know how the data they provide will be used and what data will be retained.
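The access management system described above can start as simply as an allowlist check at the gateway between employees and AI services. The sketch below is a hypothetical illustration (the tool names and roles are placeholders, not any real product's API); a real deployment would pull both sets from an identity provider.

```python
# Assumed enterprise policy data; in practice these would come from an
# identity provider or policy engine, not hard-coded constants.
APPROVED_TOOLS = {"internal-llm-sandbox", "vendor-chat-enterprise"}
AUTHORIZED_ROLES = {"analyst", "engineer"}

def may_use(tool: str, role: str) -> bool:
    """Allow a request only when both the tool is enterprise-approved
    and the requester holds an authorized role."""
    return tool in APPROVED_TOOLS and role in AUTHORIZED_ROLES

assert may_use("internal-llm-sandbox", "analyst")      # approved tool, authorized role
assert not may_use("public-chatbot", "analyst")        # unapproved tool is blocked
assert not may_use("internal-llm-sandbox", "intern")   # unauthorized role is blocked
```

Even a check this small creates a single enforcement point where usage can also be logged, which supports the auditing and training measures the paragraph recommends.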
04 Form a dedicated generative AI research working group
Enterprises can pool professional knowledge and skills within the organization to jointly explore the potential opportunities and risks of generative AI technology, inviting members from relevant fields to join the working group, including data governance experts, AI model experts, business domain experts, and legal compliance experts. Management should ensure that working group members have access to the data and resources they need to explore and experiment, and encourage them to test and validate in sandbox environments, so as to better understand the potential opportunities and business application scenarios of generative AI and to balance the risks against the benefits of applying advanced technology.
Conclusion
The development and application of generative AI is having a major impact on technology and may trigger a new revolution in productivity. Generative AI is a powerful technology that combines advances in deep learning, natural language processing, and big data to enable computer systems to generate content in human language. Enterprises and employees should harness this powerful technology and ensure that its development and application stay within the framework of law, ethics, and social responsibility; this will be an important issue going forward. PwC's AI expert team is committed to researching how to help companies establish a complete generative AI management mechanism [4], so that companies can apply and develop this emerging technology with greater peace of mind, join the wave of AI technology, and gain sustainable competitive advantages from generative AI.
Notes
[1] Interim Measures for the Administration of Generative Artificial Intelligence Services, Departmental Documents of the State Council, ChinaGov.com
[2] Innovation and Governance: Interpreting the latest regulatory trends in generative artificial intelligence
[3] Understanding algorithmic bias and how to build trust in AI
[4] Managing generative AI risks: PwC
Disclaimer: The information in this article is for general informational purposes only, is not exhaustive, and does not constitute legal, tax, or other professional advice or services from PwC. PwC member firms shall not be liable for any loss incurred by any party as a result of using the content of this article.