As automation accelerates, businesses face growing pressure to define authority, protect data and ensure accountability.

As artificial intelligence becomes ubiquitous in the commercial life of Cyprus, businesses are beginning to confront the ethical questions that accompany its rapid adoption.

Firms that embraced it in its early years are now learning that speed, scale and efficiency carry responsibilities as consequential as any technological advantage.

“AI executes authority at a pace no human can match,” said Petros Nearchou, director at the cybersecurity firm Obrela Industries.

“The problem is not AI adoption; it is authority that was never deliberately defined before AI arrived.”

In many local offices, where shared accounts are common and access rights are rarely formalised, AI inherits expansive permissions almost instantly, creating risks that can be imperceptible until they manifest.

Machines process information precisely as authorised, yet the speed at which they do so can expose vulnerabilities that organisations never anticipated.

The General Data Protection Regulation (GDPR) has long shaped how businesses in Cyprus handle personal data.

However, AI introduces challenges the law did not foresee.

GDPR established clear standards for how companies collect, store and process information. Companies must now demonstrate not only that data is collected lawfully, but also that automated systems handle it responsibly, transparently and with full accountability.

The challenge is that machines follow authority structures exactly as they exist, including those never designed with automation in mind.

Nearchou cited the example of a financial services firm in Nicosia that connected ChatGPT to a shared OneDrive account containing seven years of client files and transaction histories.

“The AI did exactly what it was authorised to do,” he said. “The problem was that nobody had considered what it should not be doing.”

The system summarised client agreements and internal strategy notes within hours, working exactly as authorised, and at no point did human judgement filter what was contextually sensitive or commercially confidential.

For GDPR compliance, companies must ensure that AI respects access limitations and that all data flows, including those through automated systems, are auditable and defensible.

Entrepreneur Andrew Anastasiou, founder of the Cypriot verification platform EnPsema, believes the ethical risks become even more pronounced when companies rely on external cloud-based systems.

“For a Cypriot business, sending sensitive client data to a public cloud AI is essentially handing over your most valuable assets to a black box,” he said. “Once data crosses into those third-party servers, you lose the data sovereignty that the GDPR and the EU AI Act demand.”

Anastasiou argues that the apparent convenience of widely available AI services often conceals deeper risks related to confidentiality and legal responsibility.

AI adoption in Cyprus is particularly pronounced among young people. Eurostat figures from 2025 show that 76.5 per cent of 16 to 24-year-olds have used generative AI tools, placing the island above the EU average of 63.8 per cent.

Professional adoption is lower, largely because many young people have not yet entered the labour market. However, the trend suggests that new generations will expect AI to be part of their daily working lives.

The impact is already visible in sectors such as media, marketing and creative production.

Automated transcription, translation, image recognition and basic editing tasks now take minutes rather than hours. Corporate teams are also finding efficiencies in areas such as contract review and compliance checks, where systems can scan hundreds of pages and highlight relevant clauses in seconds.

For many businesses, this transformation has reduced administrative burdens and freed time for strategic work.

Tasks that once required extensive manual effort are now handled by algorithms capable of analysing large volumes of material at speed.

Yet this acceleration raises ethical questions: who benefits, and who bears the cost?

History suggests that technological change reshapes employment rather than eliminating it. The introduction of automated teller machines (ATMs), for example, displaced traditional bank tellers but also created new roles in technical support and digital banking.

In Cyprus, where small and medium-sized enterprises form the backbone of the economy, workforce planning is essential.

Businesses must pursue innovation without undermining the stability of the workforce that sustains them.

Ethical AI adoption is as much about governance and accountability as it is about speed.

Another area being reshaped is reputation management.

Large language models increasingly synthesise information about companies and individuals from across the internet, influencing perceptions before a potential client or investor visits a company’s website.

Reputation has become a product of aggregated digital narratives rather than individual search results. According to the reputation management firm Status Labs, this represents a structural shift.

Traditional online reputation management focused on search engine rankings. AI systems now generate summaries from multiple sources simultaneously.

As a result, a single unresolved complaint or outdated piece of information can shape how a business is portrayed.

For firms operating in Cyprus’s financial and professional services sectors, where credibility is critical, this dynamic carries significant implications. Status Labs describes the emerging discipline as “credibility signal engineering”, requiring businesses to ensure that accurate information appears consistently across trusted sources.

Privacy and cybersecurity concerns sit at the centre of these issues. AI systems depend on large datasets, and when those datasets include sensitive information, the risks increase considerably. Weak access controls or poorly defined permissions can allow systems to process data that should never have been exposed.

Nearchou noted that many organisations accumulate data simply because storage is inexpensive.

“Executives often keep data just in case they might need it,” he said. “When AI inherits that data, the risks multiply.”

Anastasiou also warned that the appeal of low-cost cloud-based AI tools can obscure long-term implications.

“The hidden cost of free or cheap cloud AI is the potential for your proprietary data to be used to train future models,” he said. “In a professional landscape built on confidentiality, that is an unacceptable ethical and legal risk.”

These concerns underline the wider debate regarding data sovereignty and control in an environment where many artificial intelligence services operate through global infrastructure.

European legislation is beginning to address this reality, as the Digital Operational Resilience Act (DORA) and the NIS2 directive expand the responsibilities of corporate boards for managing cyber risk.

Financial institutions and technology providers must demonstrate stronger oversight of digital systems and the external partners that support them.

For Cypriot companies operating within European markets, AI governance is no longer purely technical; it is a matter of corporate responsibility.

The ethical debate is further complicated by AI’s ability to generate convincing images, audio and video depicting events that never occurred.

Cyprus has recently moved to criminalise the unauthorised publication of deepfakes, particularly those involving explicit or degrading content.

Akel MP Christos Christofides explained the reasoning behind the law.

“Artificial intelligence is now being used to create sexualised or degrading fake content without the person’s consent, and there is no effective mechanism to withdraw it once circulated.”

Technological responses are emerging alongside legal protections. Anastasiou developed EnPsema to analyse digital material and identify fabrication or manipulation.

The platform evaluates material against verified sources to detect misleading or artificially generated content.

Anastasiou described the initiative as an attempt to strengthen public trust in information.

“EnPsema recognises deception, fabrication and bias in real time, grounded against news sources islandwide,” he said.

“It is a neutral, evidence-based verification system to strengthen the fabric of digital discourse.”

Beyond verification, Anastasiou argues that businesses must reconsider where and how AI systems operate.

“True digital privacy in 2026 is not just about encryption,” he explained.

“It is about physical location. By utilising local language models or dedicated virtual private servers, businesses ensure that their data never leaves their control, effectively creating a digital vault for AI-driven insights.”

In his view, the use of locally controlled systems offers a practical path toward protecting sensitive information while still benefiting from automation.

This approach is particularly relevant for industries that handle confidential intelligence or investigative data.

Anastasiou remarked that in certain professional contexts information cannot be shared on public networks at all.

“When we collaborate with legal teams on open-source intelligence and investigations, the information we gather is highly sensitive and often cannot be shared online,” he explained.

“Using local AI allows us to synthesise that intelligence without creating a digital footprint on public servers.”

Cyprus also finds itself participating in the broader European effort to define clear regulatory boundaries for artificial intelligence.

Deputy European Affairs Minister Marilena Raouna has emphasised the need to simplify the legal framework as the EU prepares to implement the Artificial Intelligence Act.

“Simplifying the rules on artificial intelligence is essential to ensure the digital sovereignty of the European Union,” she said during discussions on the legislation.

The initiative forms part of a wider European effort to create harmonised rules that support innovation while maintaining safeguards for citizens and businesses.

Adjustments to implementation timelines and targeted exemptions for smaller enterprises are intended to reduce administrative pressure while maintaining oversight of high-risk systems.

Raouna remarked that the objective is to provide legal certainty for companies operating in a rapidly evolving technological environment.

For Anastasiou, the message from European regulators is unmistakable.

“The EU AI Act is a clear signal that the era of experimenting with customer data is over,” he affirmed.

“Ethics in artificial intelligence now means transparency and accountability. Knowing exactly how a model reaches a conclusion and ensuring that no personal data is leaked in the process is no longer someone else’s problem.”

Taken together, these developments show how AI is reshaping the ethical foundations of business in Cyprus.

The technology promises efficiency and insight on a scale unimaginable only a decade ago.

At the same time, it raises fundamental questions about authority, privacy, employment and reputation that cannot be resolved through technology alone.

Organisations must determine who controls automated systems, what information those systems are permitted to access and how their outputs are verified.

Nearchou believes the most significant challenge is cultural rather than technical.

“Stop asking what AI can do and start defining who AI is allowed to be within your organisation’s authority model.”

Artificial intelligence is likely to become even more deeply embedded in the commercial life of the island in the years ahead.

Younger generations entering the workforce already treat it as a routine tool rather than an emerging technology.

Businesses that establish strong ethical frameworks today will influence how artificial intelligence evolves within the Cypriot economy.

The question facing Cyprus is therefore not whether artificial intelligence will be adopted, for that process is already under way.

The challenge is whether companies can integrate it responsibly while preserving the trust that underpins economic growth.

Technology may transform the way organisations operate, but the ethical choices surrounding its use must remain firmly human.