From the Perspective of Data Protection and Cybersecurity: The Increasing Need for Regulation of Artificial Intelligence


The rapid development of artificial intelligence (AI) technologies and the widespread use of generative AI systems have created a growing need to regulate AI. The steps taken and the issues discussed in European Union member states under the EU's AI Act have begun to clarify the legal framework applicable to such systems and applications. Although Türkiye has no specific regulation applicable to AI, there are developments worth considering, primarily from the perspective of data protection and cybersecurity.

Protection of Personal Data – Generative Artificial Intelligence Guideline

The Personal Data Protection Authority (the “DPA”) has emphasized that, although generative AI applications offer innovative opportunities and benefits, they also carry certain ethical, legal and societal risks. Accordingly, on 24 November 2025 the DPA published the Guideline on Generative Artificial Intelligence and Protection of Personal Data (the “Generative Artificial Intelligence Guideline”) to provide guidance to data controllers that develop and use such applications and systems.

As is known, the DPA closely follows developments relating to AI around the world. You may also access our article here on the recommendations previously published by the DPA regarding the protection of personal data in the field of AI.

This time, the DPA focuses on AI applications that are trained on large-scale data sets and capable of generating content in different formats, such as text, image, video, audio or software code, in response to prompts entered by the user, and aims to raise awareness of how those AI models operate and how they generate such outputs. In the Generative Artificial Intelligence Guideline, the DPA explains in detail the operating principles of those systems and clarifies how the risks they create can be addressed in light of data protection rules. The main risks noted by the DPA are “hallucinations”, inconsistent outputs, biased or prejudiced outputs, data privacy and security concerns, violations of intellectual property rights, and “deepfake” and manipulative content.

Data controllers are advised to pay attention to the Generative Artificial Intelligence Guideline and the guidance provided by the DPA, some of which may be summarized as follows:

  • If the personal data of Turkish residents are used or otherwise involved in the training, evaluation, monitoring or use of generative AI systems, it should be borne in mind that these processing activities are subject to the Turkish Personal Data Protection Law (the “KVKK”). Even if AI models do not directly target personal data, they may process personal data incidentally or indirectly. Furthermore, even where only anonymous or anonymized data is used during processes such as the design, development and testing of those systems, anonymization is itself a processing activity, and it must be demonstrated, using technical methods and objective criteria, that data claimed to have been anonymized is truly anonymous.
  • Although the complex structure and multi-layered functioning of generative AI systems make it difficult to identify the data controller and the data processor, the Guideline emphasizes that this identification is important for determining the obligations arising under the KVKK, and it illustrates with examples how the identification can be made. According to the Guideline, data controllers should be identified on the basis of the actual control the parties exercise over the data processing activities carried out with those systems, rather than the contracts between them, and it should be verified whether actual practice is in line with those control and decision-making powers.
  • Compliance with the general principles of the KVKK and the legal grounds for processing regulated under the KVKK was highlighted. In this context, it was emphasized that personal data processing activities carried out throughout the lifecycle of generative AI systems must be conducted in accordance with the general principles of the KVKK, and practical examples were provided on how each principle can be addressed across that lifecycle. For each stage, such as the development and operation of AI systems and the use of their outputs, it will be necessary to determine separately which legal ground is relied upon. Indeed, the processing of personal data provided by users to run or use the model, the use of that data to develop the model, the use of model outputs to personalize interaction with the user, and the use of model outputs to develop the model should each be treated as separate data processing activities requiring separate legal ground analyses. The Guideline contains many practical examples of how to conduct these analyses.
  • Compliance with cross-border data transfer restrictions is also addressed as a critical issue in the Guideline.
  • The Guideline emphasizes that transparency in the use of generative AI, the effective exercise of data subject rights and ensuring data security form the basis of both legal compliance and societal trust. Privacy notice obligations must be fulfilled in a clear and separate manner. Individuals must be granted the right to object to, and request a reassessment of, processing activities based on automated decision-making, and those processes must be subject to human intervention. A privacy-by-design approach must be adopted, and risk-based security measures must be implemented. In this context, it was indicated that data controllers should proactively oversee their systems using methods such as “red teaming”, privacy-enhancing technologies (PETs) and impact assessments.
  • The benefits of conducting impact assessments when relying on legitimate interest as the legal ground for processing personal data in AI systems were particularly underscored. Data controllers are advised to assess in advance matters such as whether the processing to be carried out with AI applications is necessary, what benefit it provides to the data controller, whether the same benefit could be achieved without processing personal data through generative AI, the reasonable expectations of data subjects regarding their privacy, and the possible adverse effects that may result from such processing. In this respect, even though impact assessments are not required by the legislation, the DPA once again emphasized their importance in data processing practices based on legitimate interest.

Cybersecurity

With the appointment of the Cyber Security President, it is expected that the organization of the Cyber Security Presidency will be completed soon and that the Presidency will also focus on AI. As is known, the Cyber Security Law was published in the Official Gazette dated 19 March 2025 and entered into force. The law aims to ensure the effective implementation of national cybersecurity policies, increase the resilience of public institutions and critical infrastructures, integrate technological developments into processes, monitor and address cyber incidents from a central point, implement deterrent sanctions, regulate standardization and certification processes, and increase penalties for cybercrimes.

Pursuant to the Cyber Security Law, those who provide services, collect data, process data and conduct similar activities using information systems are subject to certain obligations, including but not limited to (i) promptly providing the Presidency with the data, information, documents, hardware, software and any other contribution requested within the scope of the Presidency's duties and activities relating to cybersecurity, (ii) taking the measures prescribed by legislation relating to cybersecurity for the purposes of national security, public order or the proper execution of public services, (iii) reporting without delay to the Presidency any vulnerabilities or cyber incidents identified in the area where they provide services, (iv) procuring the cybersecurity products, systems and services to be used in critical infrastructures from cybersecurity experts, manufacturers or companies authorized and certified by the Presidency, and (v) giving due importance to compliance with the policies, strategies, action plans and other regulatory acts issued by the Presidency to increase cyber maturity, and taking the necessary security measures. The regulations to be issued by the Cyber Security Presidency will therefore be extremely important for the development, operation and use of AI systems, as well as their procurement and deployment, and should be followed closely.

New Artificial Intelligence Law Proposal

In addition to the first artificial intelligence bill, which we evaluated in our article titled “Legislative Developments for Artificial Intelligence in Türkiye” and which adopts an approach in parallel with the AI Act, a new law proposal was recently submitted. It emphasizes that the risks created by artificial intelligence systems should be addressed under various laws, primarily the Turkish Penal Code, and proposes amendments to many laws that refer directly to AI but focus on sanctions.

Priority issues addressed in the bill include the following:

  • regulating the ‘inducing of AI systems to commit a crime’ as an offence and establishing the criminal liability of those who design or train an AI system in a way that enables the commission of a crime;
  • blocking access to AI-generated content that violates personality rights, threatens public security or is false (deepfakes), and allowing urgent decisions to be taken when necessary;
  • ensuring that data sets used in AI applications comply with the principles of anonymity, the prohibition of discrimination, and legitimacy; and
  • preventing risks of misinformation, manipulation and hallucination.

You may review the full text of this new bill (available only in Turkish) here.

Although this bill is still being discussed in the Committee on Industry, Trade, Energy, Natural Resources, Information and Technology, it is not expected to be enacted in its current form. Work on an AI law still appears to require a detailed framework in parallel with European Union regulations, developed with broader stakeholder participation and the input of experts from various sectors.

Evaluation

Benefiting safely and responsibly from the opportunities offered by AI systems requires an approach that protects individuals’ personal data, respects human rights, and is transparent and accountable. In addition, matters such as human oversight, privacy impact assessments and privacy by design are already subject to personal data protection and cybersecurity legislation to the extent applicable.

However, although the scope and entry into force of the AI Act remain subject to debate in the EU, there is currently no approach in Türkiye fully parallel to the AI Act; at this stage, the matter is addressed only through documents such as guidelines, policies, strategies and action plans. The bills prepared by members of parliament also show that certain risks need to be identified as a priority; however, the bills submitted so far may be characterized as sanction-focused regulations aimed at preventing specific legal problems in specific areas, rather than putting in place a dedicated, holistic regulation in Türkiye. Undoubtedly, the need for artificial intelligence regulation specific to Türkiye is increasing.

Special thanks to Ufuk Ege Uçar and Mina Sarı for their contributions.