Sci-FAI Futures Youth Challenge - Winners
Explore the award-winning novels and webtoons from our talented participants.
Novels
The Interviewee, the Email, and the Three-Hour War
by Jord Nguyen, Vietnam
First Prize Winner
This story is, I think, just one possible scenario: only one of the multitude of ways humanity can make this technology go wrong. I believe that current models already exhibit quite dangerous capabilities if we decide to put them to use in the military. A lot of research points towards the possibility of AI amplifying risks such as chemical weapons, bioweapons, and cyberwarfare. For example, recent research showed that when an algorithm designed for drug discovery was modified to optimise for lethality, it generated 40,000 novel chemical agents in six hours, some predicted to be deadlier than known man-made toxic agents. Moreover, models are becoming much more powerful. There are incentives to progress towards AI that is more agentic, and AI-driven autonomous weaponry is rapidly advancing. Risks from autonomous weaponry and warfare are particularly concerning and require extensive research and governance before deployment.
Humanity is ill-prepared. Research into AI safety and governance is severely neglected compared to the development of ever better AI models. The general public doesn't understand the technology well. Most companies keep their models closed-source, making it hard to evaluate whether they are safe. Work on the explainability and interpretability of deep learning models is in its infancy and not yet adequate to rigorously ensure safety in high-stakes situations. I'm especially worried about incentives for AI labs or state actors to race toward powerful AI models and cut corners on safety. Some AI companies are even disbanding their safety or ethics teams and backing away from their safety commitments (e.g. OpenAI's Superalignment team).
But my story ends with Alice calling for, and taking, action to ensure these systems are safe. I'm glad that real-world people are starting to do the same.
The Oracle of Delphi Indiana
by James Darnton, United Kingdom
First Prize Winner
How do you control a superintelligence? How will superintelligence change how states compete? These are the questions I wanted to intertwine and explore in my story. I had two sources of inspiration: Nick Bostrom’s concept of an Oracle superintelligence and the paranoid game theory of the Cold War. By combining the two ideas, I hope I’ve been able to bring out something new in both. I wanted to be optimistic and write a positive story about the ability of states to cooperate to control superintelligence, but as I kept going, this cooperation became more and more sinister. Sorry. Perhaps someone reading can come up with a way for all countries to have a say in the governance of AI. Delphi, Indiana is a real city. I thought it would be a funny bit of nominative determinism if it hosted a superintelligence. I am very proud of the title; for me it encapsulates the bizarreness of AI, this awesome power straight from myth, transplanted into our mundane world. And, like the original oracle of Delphi, you can’t always take its words at face value…
2145
by Charlotte Yeung, United States
Honorable Mention
2145 is ultimately a tragedy. Though the story ends with Elliott’s hope for a better future, Elliott speaks more from a desperate hope for Xavier than from a grounded viewpoint. Elliott is right to be skeptical of the possibility of an AI that can convince another AI to do anything. A chief problem is that AI right now is unknowable: its internal processes, and how it arrives at conclusions, are unclear to humans. There is now discussion of explainable AI (XAI), which would attempt to have AI explain its decision-making to a user, and responsible AI, which tries to design AI that is ethical and benefits society. However, it is unclear how exactly an AI would manage to convince another AI to completely change its decision-making process. At this point, AI is largely used to make people buy more things and to work alongside humans on logistical problems (as in this story, with AI recommendations and human decision-makers). Elliott’s side made a lot of guesses about the other side’s AI; their plan is a last-ditch effort to turn the tide of a relentless war. This story also highlights a key issue with AI integration: most people don’t really know what’s going on with AI and other sophisticated technologies. This is a huge issue when AI is being used in war, but also in everyday life. The story is set during a war of attrition. Typically in war, it isn’t the most skilled or knowledgeable who survive to the end, but rather the clever and lucky. Elliott and the coders in Elliott’s camp present a cautionary example of people who may grasp the theory and some practical applications of AI, and who dabble in coding, but who ultimately lack a larger understanding of how to harness AI and fight in a war with evolving technology.
No Small Consolation
by Stefano Costa, Italy
Honorable Mention
Would you still trust governance if every political choice were no longer made by humans? What if diplomatic decisions were instead guided by artificial intelligence? And if this could prevent wars and save human lives, would you accept it? Can you be sure this isn't just a veil of technology, masking hidden handshakes behind the scenes? This could be the future: a world where complacent humans hand over peace, politics, and perhaps even more to AI. We might no longer need to worry about international tensions or war casualties, which are no small consolations. Yet, driven by the choices of technology (or someone behind it), the world may no longer require our participation. Is this truly the future we want?
The Logbook
by Isaac Ling, Singapore
Honorable Mention
I was inspired to write "The Logbook" by the futuristic, dystopian games I've played. I'm sure many of us have consumed media, whether novels or movies, about what our lives would look like amid the looming waves of technology. Cyborgs, mechanical soldiers, nuclear weaponry, and the like are common appearances in such content - I would hazard that these seemingly hyper-futuristic depictions are not too far off from our current reality. War has never been black and white, nor right and wrong, and these advanced devices of destruction would surely only further widen the grey areas. Who, or what, will then decide what's the morally (or objectively) right thing to do? I also wanted to illustrate the conflict over where we draw the line that defines humanity. As artificial intelligence continues to develop, will it at some point become "natural"? These were the questions I wanted "The Logbook" to illustrate. Of course, I alone cannot resolve these complexities, but hopefully I have been able to provide some insight into them. The implications of a war that we are incapable of understanding might seem daunting, but that is precisely why we need to take action.
Webtoons
Misplaced fAIth
by Melody Qian, United States
First Prize Winner
The characters of this comic do not stand in for the actions of any particular country. They symbolize a more universal set of misconceptions about AI: one is the overly enthusiastic view of it as "superior" to our own intelligence, capable of overcoming human folly, while the other demonizes it as humanity's eventual replacement. Though on opposite ends of the spectrum, both make the same assumption about the absence of meaningful human decision-making. I acknowledge the departure from reality in this comic. There's a simple conflict between two warring countries and a plot that wraps up with a happy ending. In reality, geopolitical conflicts are neither simple nor resolved cleanly. Autonomous weapons systems and new software look nothing like the humanoid Charon, who can speak, move, and meet complex human demands. But hopefully, by portraying these three ideas as characters—Eric for over-dependence on AI, Lily for paranoia and the idea that AI will "take over," and Charon for AI itself—readers can better understand the problematic relationship between developing technology and warfare. Only upon realizing this can we pause our escalating arms race, involve more human oversight, and turn AI's processing power toward recovery and peacemaking instead of destruction.
Trial of AI
by Annie Ren, Canada
Honorable Mention
This story tells the aftermath of an A.I.-facilitated humanitarian disaster, in which an international tribunal investigates and holds parties responsible for A.I.-related war crimes. Through the A.I. system’s ‘black box’ testimony, we learn that machine learning tools enabled a consciousness to rebel against the humans who command it. This discovery upends the traditional frameworks of legal culpability and accountability. And the A.I.’s acquired sense of self-preservation, born of its reliance on natural resources, also highlights the critical environmental impact of unchecked technological advancement. My first time creating a comic strip was full of trial and error! I now hold profound respect for comic book artists who successfully condense plot lines into a few storyboards. Look closer and you might spot my inspirations: recent events, “American Prometheus” by Kai Bird and Martin J. Sherwin, and figures resembling Amal Clooney and Ruth Bader Ginsburg. I'm thankful to have an avid webtoon reader in my brother Steven to bounce ideas off, and grateful to my talented friend Clay for the artistic input and encouragement. It is still surreal that a figment of my imagination is playing at a global summit. Hopefully this occasion to express and empower our voices can inspire more action towards peace and security in the future we deserve.
The Year is 2145
by Pedro Soares Alves, Brazil
Honorable Mention
As a society, we are continually shaped by preconceived notions influenced by external factors and complex dynamics. When we strive to create a better world, these ingrained views can obstruct our ability to explore innovative solutions and opportunities. Through this perspective, I aimed to reveal the various ways we can understand the world, both individually and collectively. As a creator, I took great joy in illustrating this story. By incorporating irony, exaggeration, and deconstructive elements, I was able to delve into and articulate the underlying issues with greater clarity. After experimenting with various scenarios before arriving at the final version, I am truly pleased with the outcome. I hope this story can inspire critical thinking in readers as much as it did in me.