
An intelligent future

Kamila Hyat
Saturday, Apr 13, 2024

Once confined to the realm of science fiction novels, artificial intelligence is now among us in more and more fields of life. This brings both benefits and disadvantages: it threatens to take away jobs even as it adds value and quality to work in fields such as engineering, surgery and other spheres of life.

AI has also made its presence felt on social media, where it can be spotted ever more often. It appears as fabricated or falsified news stories, as write-ups generated by typing a few phrases into the relevant website, and as images conjured up from the minds of their creators. Some of these images are harmless; others are not. They mislead, they delude, and they can sometimes be dangerous. We have seen, most notably over the past five years, what harm fake information spread over social media can cause.

Of course, everyone around the world is a potential victim of the untruthful stories and posts placed on social media. Vulnerable populations, such as those in Pakistan with little awareness that not everything that appears on their screens is true – indeed, much of it is fake – are especially at risk.

These include elderly audiences who are not well acquainted with how various websites and posts work; the very young, or Gen Z as they have been labelled, who are flocking to social media all over the world; and those who are less literate and less well educated than savvier users who understand how technology can be misused and are aware of the latest methods of producing fake or completely falsified information.

Pakistan has a history of falling victim to fake stories placed on social media, which at the present moment is being used for political purposes in particular. In this age, we need to find ways to make people more aware that what they are seeing may be untrue and should be viewed with caution. This is especially relevant as many of the stories placed on the net relate to health, medication, 'miracles' and other such realms of life.

Of course, there is much uncertainty over how to educate the entire population, the majority of it now armed with smartphones, on handling this new influx of information. But the effort has to be made. In the first place, the mainstream media, which for all its faults and follies remains the most reliable source of information, needs to take on social media by informing people how to determine what is real and what is simply fabrication. This can be done, as many of us know, by checking the source of the information that has been posted; websites are available to help in this task.

There are also ways to examine an image and determine whether it has been doctored, tampered with or simply created by AI. Of course, by the time people establish whether what they are seeing is real, the damage may already have been done. But greater wariness about what we see on social media can at least make the world a little more tangible, with daily events taking on more reliability in the eyes of a deluded population.

The task of educating people can begin at schools. Teenagers and younger children are often the key victims of social media bullying and of photographs that have been tampered with in various ways. If pupils can be taught how to spot a fake photograph, it will be easier to prevent a falsified image from spreading and causing more and more harm. The same needs to be done in places of higher learning, and possibly even in office environments, through advertisements on social media and elsewhere.

Most important of all is the need to dissuade or prevent people, as far as possible, from aimlessly forwarding material they do not know for certain to be fact through WhatsApp and the many other platforms that exist for this purpose. The mindless forwarding of items has become something of a menace, responsible not only for spreading rumours and creating false panic but also for doing real damage to individuals and groups. Disinformation campaigns about polio drops, the Covid-19 vaccine and much else come to mind. There are other ways too in which faked information can cause great harm.

With the proliferation of AI-generated content on social media, the need for digital literacy has never been more urgent. As we move into a world where AI will be with us more and more, we need to look carefully for the truth and check what has been said and why. In some cases it is easy to spot the forgeries and underhand doings posted on social media. In many others, the task is far harder and more demanding, but it has become crucial to our lives.

Fewer and fewer people live without some kind of access to social media forums; they therefore need to understand how these forums work and how to look out for deliberate efforts to mislead them. This is not an easy task given the versatility and flexibility of AI. Collaborative efforts between tech companies, educators, policymakers and civil society are essential to address the complex challenges posed by AI-generated content.

Developing AI algorithms capable of detecting and flagging fake content could serve as a valuable tool in the fight against misinformation. The evolving nature of AI technology demands continuous adaptation and innovation in our strategies for combating its misuse in spreading falsehoods.

The task will only become harder as AI develops and threatens ethical conduct in academia and other fields. But the effort to stop the spread of lies and untruths has to be made. If we do not step forward to do this, we will in many ways be doomed even as we step into a potentially bright new future.

The writer is a freelance columnist and former newspaper editor. She can be reached at: kamilahyat@hotmail.com