Transcend by Faisal Hoque

Since time immemorial, human beings have searched for what the European alchemists called “the philosopher’s stone”: a mythical, magical material capable of granting humans their deepest desires. Versions of the philosopher’s stone are found in many traditions around the world—in Hinduism and Buddhism, the search is for the Chintamani, a jewel capable of fulfilling wishes, while the Daoist alchemists strove to create the Grand Elixir, a compound that would prolong life forever.

This cross-cultural notion reflects a universal human yearning: our desire for a tool that will allow us to transcend the limitations of the human condition, a tool with the power to bring anything we can imagine into reality.

For many advocates of artificial intelligence, AI is the philosopher’s stone recast for our modern world. Visions of an AI-driven future hold this new technology up as an almost magical tool that will soon be able to fulfill all human wishes. It will grant us the capacity to see further and think more deeply than has ever been possible before. It will transform education and health care. And—an idea that would have resonated with many a Renaissance magician—some even suggest that AI will make humans, or at least some humans, immortal.

But if we are to tap this source of power, we would do well to listen to the lessons of the past. In every tradition that took the search for the philosopher’s stone seriously, there are urgent notes of caution about approaching this magic unprepared. We find carefully elaborated frameworks designed to ensure the effective use of this incredible power. And we find warnings about what happens when the power is irresponsibly misused.

The effect of the philosopher’s stone is not purely a matter of a force directed out at the world. On the contrary, the warnings also point to the change this uncanny material can bring about on the individuals who use it. To proceed safely, we must prepare ourselves, and this is a moral and spiritual endeavor as much as it is a case of grappling with technical details.

The view that we need to balance the risks and benefits of AI is not universally held. At one extreme, there are AI maximalists who believe in the untrammeled development of all forms of artificial intelligence and who see safeguards as misguided restrictions that only get in the way of inevitable progress. At the other extreme, we have the AI Luddites, who think that we should shut down all development immediately because AI poses an extinction-level threat to humanity. The only responsible policy, they argue, is to walk away from this perilous technology completely.

In contrast to these two positions, we believe that the Greek philosopher Aristotle was on to something when he taught that wisdom lies in moderation. We will argue in this book that the best path to tread when it comes to AI is one that takes us through the middle ground between extreme visions of the future. With respect to the AI maximalists, while they are right to extol the potential of AI, they are mistaken in thinking that there are no significant dangers. With respect to the AI Luddites, they are right to warn about the potential dangers but mistaken about how we should address them. The AI genie is out of the bottle. We are not going to put it back in. If we are to have any chance of responding wisely to the rise of AI, we must grasp and balance both the opportunities and the dangers inherent in this technology. The question is no longer whether we should use AI, but how we can use it responsibly and well.

AI opens up untold possibilities for realizing human potential—and for destroying it. But what is human potential? And what is so special and valuable about human beings that makes them worth protecting? Traditionally, questions like these have been treated as belonging to the domain of philosophy. With the advent of AI, they become deeply practical.

As well as asking questions about our evolving relationship with AI, we must also recognize a stark but crucial fact: The further we look into the future, the less we can know about the precise capabilities and threats that AI will manifest. The answer to the question of what AI might one day become is “No one really knows.” Acknowledging this radical uncertainty about the future is an essential starting point for responding wisely to the changing AI landscape and must be a central feature of any approach we develop.

So, we are faced with a tool of potentially limitless power, a technology with an almost unimaginable capacity to both benefit and harm us. And we have no real idea how this technology will evolve over the long term. What can we do? How can we respond? How can we give ourselves the best chance of being wise users of this modern philosopher’s stone and wise guardians of our future?

Any answer must be speculative and provisional. But so long as we start with these limitations in mind, we can develop adaptable frameworks that are capable of changing to meet the evolving challenges of this new field. In the central chapters of this book, we offer two frameworks that aim to do just this: the OPEN framework for unlocking the potential of AI and the CARE framework for mitigating its dangers. Each framework provides a practical launchpad for planning that can be applied immediately, while also incorporating the need to reflect on our plans and take account of the changing AI landscape over time. Using these frameworks will enable any individual or organization to develop safe, effective, and future-proofed strategies for collaborating with AI.

BEING HUMAN IN THE AGE OF AI

If we approach AI with an open mind and an attitude of care for ourselves, each other, and future generations, this technology has the potential to unlock new realms of human capability. But it also demands that we examine and challenge what it is to be human. AI is not just built by human beings; it is built on human beings. It learns from human behavior and human culture, from what we say, do, and produce. AI is humanity’s mirror, its truth-teller, its child. It challenges us to confront our past and to step into a future in which we will become different.

The philosopher’s stone turns base metals into gold and promises immortality to anyone who possesses it. But the ultimate prize and promise is something even greater: the ability to rise above our limitations and journey into higher realms. If we are to finally grasp what humanity has sought for so long, we will need to detach ourselves from the dark and ugly sides of being human and devote ourselves to cultivating and promoting the highest possibilities of the human spirit. We will need to transcend ourselves and our limited view of what it is to be human.

To achieve this goal, we must move with hope and with care, with a mature, responsible optimism, with epistemic humility and a radical openness to the unknown. Most of all, we must start somewhere. It is in this spirit that we offer our ideas here—as stepping-off points for what is shaping up to be humanity’s grandest project and its greatest challenge. The stakes are immense. But so is the prize.

 

CHAPTER 1

HOW TO THINK ABOUT HUMANS

We often talk about AI as if it is “Coming soon to a reality near you!” Just around the corner is tech that will improve the accuracy of cancer diagnoses, power self-driving cars, control bionic limbs, and provide the backbone for smart, adaptive work environments. In a couple of years, we can expect AI to be writing readable books, planning corporate strategies, and providing all sorts of direct assistance to individuals. But it’s not just a coming attraction. The truth is, it’s already here.

Most of us have been carrying some basic AI functionality around in our pockets for years now. Facial recognition technology to unlock our phones, the autocomplete tool in messaging apps, and the algorithms that suggest the next song or video on a streaming platform all rely on some form of machine learning. For the most part, the way AI has moved into our lives so far seems pretty innocuous. It simply enhances or extends existing human capabilities, letting us do things a little faster or a little more effectively. These changes may raise some ethical issues here and there, but there’s nothing particularly dramatic that we need to worry about. No one is really concerned about losing their humanity to Google Maps or Netflix’s algorithm, right? It’s the big-picture changes that are the stuff we should worry about: the conscious machines enslaving us or the intelligent nanobots reducing the planet to a fine paste as they replicate endlessly.

But if we think a little harder about the apparently trivial ways in which AI intersects with our lives, it quickly becomes clear that there are bigger issues at play. Take autocomplete, one of the most ubiquitous and apparently innocuous machine learning applications of all. Who could possibly object to their phone suggesting the most likely word to use next in a sentence, saving the user dozens of taps on their screen for every message they send? Not us. We like the convenience as much as the next person. But we do want to point out something important here.

Autocomplete makes life a little bit easier. And that’s great. But in making things easier, it also creates a motivating force that feeds into our decision-making processes. It is easier to use this functionality than to not use it, and this convenience gives us a reason to accept the word offered by the algorithm rather than expressing ourselves in a more detailed, more nuanced, or otherwise less statistically commonplace way. Sometimes we’ll be laser focused on using a certain phrase, so we’ll ignore the suggestion that comes up. If we’re really punctilious about our prose, we might have several goes at typing out a word even if it’s a challenge to spell. But often—very often—we’ll accept the easy option rather than spending ten or twenty times as long to get our preferred word just right. Without making any conscious decision, and without anything being forced on us, we find our use of language constrained, variety and precision sacrificed on the altar of convenience. And while this is only a minor instance, it points the way to how scaled-up interactions with AI systems could lead to much more significant constraints. Once we begin thinking down this path, we quickly find ourselves confronted with questions about the value of freedom versus efficiency, about what counts as help versus what counts as control, about which uses of AI are truly enhancements for human beings and which will ultimately end up harming us.
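To make this dynamic concrete, here is a minimal sketch of the statistical idea behind next-word suggestion. It is a toy bigram frequency model over a made-up corpus, not the neural system a production keyboard actually uses; the names and data are ours, for illustration only. The point to notice is that what gets offered is, by construction, the most statistically commonplace continuation:

```python
from collections import Counter, defaultdict

# A made-up message history standing in for the data a real keyboard learns from.
corpus = "i am on my way home . i am on the train . i am running late ."

# Count how often each word follows each other word (a bigram model).
bigrams = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word):
    """Offer the single most frequent follower of prev_word, if one exists."""
    followers = bigrams.get(prev_word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("am"))  # -> 'on': the commonplace continuation wins; 'running' is never offered
```

A rarer, more precise word is never suggested, which is exactly the quiet pressure toward the commonplace described above.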

As soon as we start thinking in any serious way about AI, we immediately become embroiled in some very deep questions about human beings, about values, about freedom, and about how and why we make the choices we do. We will argue that these philosophical questions are both foundational and deeply practical. They are foundational because our philosophical views regarding the nature and value of humanity are the basis from which we will think about how to respond to AI. And they are practical simply because it is necessary to think about them when deciding how to respond well to the emergence of AI.

In this chapter, we will not argue that there is something essential to humanity or something unique about it that needs to be protected or enhanced—that may be the case, but it is not a line we think useful to pursue. Instead, we will make a much more basic claim. We will claim that the core value of humanity is our ability to make choices about what matters to us. When AI enhances this ability, we should actively pursue it; but when AI detracts from this ability, we should flee as fast and as far as we can. We will also suggest that most of the capabilities AI is likely to develop will fall into a more neutral middle ground. Sometimes, these capacities won’t matter at all to the question of what it means to be human in the age of AI. And sometimes humans will make them matter even though there is nothing intrinsic to them which demands that they must. Ultimately, it will turn out that understanding the importance of AI—and understanding its potential for good and ill—will depend critically on improving our understanding of ourselves.

GROWN-UP QUESTIONS

“When I was a child,” says Paul the Apostle in his First Epistle to the Corinthians, “I spake as a child, I understood as a child: but when I became a man, I put away childish things” (KJV).

There is a deep wisdom in this. To everything there is a season, and there comes a day for all of us when it is time to grow up (some of us choose not to, or are unable to, but that’s another story). And as with all truly deep wisdom, it immediately throws up a question: What are childish things? What is it that we should put away when we grow up?

The Athenian aristocrat Callicles had a surprising answer to this. “When I see philosophy in a young boy,” he says to Socrates, “I approve of it. But when I see an older man still engaging in philosophy and not giving it up, I think such a man by this time deserves a flogging.”

Socrates disagrees with Callicles, and so do we. Only philosophy has the resources to guide us through questions about what it is to be human, what matters about the human experience, what a good life is for a human being, and what is worth protecting and enhancing in our lives. And our contention is that we cannot think properly about AI without thinking properly about these fundamental philosophical questions.

THE MEANING AND VALUE OF HUMAN BEINGS

Let’s begin with a simple fact: AI will have all manner of effects over the course of its development. Now let’s ask a simple question: Should we care about all of those effects?

To see why the answer is no, consider the following. Let’s imagine that one of the effects of AI is that autocomplete functionality increases the usage of the letter “s” in English-language text and chat messaging by 8 percent. Let’s stipulate that this change is completely irrelevant to the interests of human beings or any other living beings or any systems that affect sentient beings. Let’s further stipulate that the fact that the letter “s” appears 8 percent more frequently also has no effect on any of the things that are relevant to the interests of sentient beings.

Now, from the perspective of human beings, the only reasonable response to The Great S Revolution is: So what? Who cares? The frequency of the use of this letter just doesn’t matter. We simply end up residing in a slightly more sibilant sonic sphere.

This thought experiment shows something that is almost banal but that is nonetheless profoundly important in figuring out how to respond to AI. AI will have a bunch of effects, and not all of them will matter to human beings. And even among the effects that do matter, there will be differences in how much they matter. Any wise response to AI must take both these things into account.

Put simply, if we want to understand what matters about AI, we first need to gain some clarity about what matters to and for humans, and about how much it matters. Or, to put it another way, we need to start trying to understand what is valuable to human beings and about human beings. If we don’t do this, it is almost inevitable that we will waste a great deal of time, energy, and resources on actions and policies that do little or nothing to unlock human potential. Worse, we will likely fail to protect what does matter about humanity and to humanity.

To start ourselves down this essential road, we need to engage with even more fundamental philosophical questions: What is a human being? What does it mean to be human? What is the proper aim of human life? What should human beings strive toward?

Our answers to these questions will have an enormous effect on what we think we need to do about AI. Here are a couple of examples to illustrate. Aristotle famously defined man as a rational animal. Let’s say, then, that this is the “essence” of human beings. What is special about human beings is our capacity for rational thought and our ability to act based on rationality rather than impulse. If this is what we believe, then when we think about how to respond to AI, we will place enormous importance on what we need to do to make sure that human beings continue to exercise and cultivate those capacities. For instance, we may argue that it is necessary to keep AI companions out of educational settings, because it is important to make sure that children develop the capacities for rational thought that are the essence of humanity rather than always and immediately turning to an AI assistant for answers.

On the other hand, let’s say that instead of defining humanity in terms of rationality, we believe that the secret sauce to being human is our ability to feel. Further, let’s say we narrow this down to our ability to feel love. If we believe that this is the most special and important thing about human beings, then we may not care at all about AI assistants helping children with mathematical calculations. On the humans-as-lovers model, the ability to maneuver symbols according to certain conventions would be seen as irrelevant to the value of humanity. We would instead see the opportunities and threats of AI through the lens of what it is about AI that can help and harm human beings in developing and exercising their capacity to love.

So, to respond well to AI, we need to understand what to respond to. We need to understand what effects of AI will be relevant, and we need to understand their relative importance. This requires understanding more than AI. It requires understanding human beings and the things that matter to human beings. And to understand these things, we need to go back one step further and ask—what does it mean to be human? What (if anything) is special about humanity?

AI AND VALUE JUDGMENTS

One way of putting the above is to say that responding well to AI requires making value judgments. And these are not marginal value judgments about whether brown bread or white bread is better. They are value judgments about what fundamentally matters to human beings and about human beings.

There are at least two different questions we could be asking when we make those judgments. The first is an empirical question—what, as a matter of fact, do human beings find valuable? Answering this question requires the application of empirical methods, such as observation, quantitative and qualitative research, data analysis, and so on. The second question we could be asking is normative. To put it simply, this question asks not what people do in fact value but what they should value.

The second version of this question is uncomfortable enough to ask, let alone try to answer. It sounds elitist and authoritarian, to say nothing of anti-democratic and arrogant. Who are we to tell someone what should matter to them?

The problem is, we don’t have a choice. When it comes to AI, we cannot evade making value judgments, and we cannot avoid the responsibility of making normative claims that we think should apply to everyone. The only choice we have is whether we do this thoughtfully or thoughtlessly. This is so for three reasons.

The first reason is one of consistency. Imagine that someone says: We don’t want to make general value judgments. We do not have the right to tell people what is valuable or what they should care about. Let people decide for themselves what matters. Rather than developing policies based on value judgments about what matters, let’s start from the assumption that people should have as much room as possible to decide for themselves what matters. This is a very reasonable thing to say, but the point is that it also already includes a value judgment, namely, the judgment that policy should be based around giving people freedom to choose what matters to them. And one cannot consistently make that claim and at the same time argue that it is impermissible to make value judgments.

The second reason begins with the Bhagavad Gita, that great and ancient Indian teaching. The prince Arjuna is on the battlefield, and he sees honored teachers and family elders on the opposite side. He is overcome with despair and dejection, and he lays down his arms. He wants no part in killing them. “What should I do?” he asks his charioteer, the god Krishna.

Taken in its entirety, Krishna’s answer is the ultimate flowering of one of the great spiritual traditions of the world, but here we are concerned with one very small part of it. “No one,” says Krishna, “exists for even an instant without performing action; however unwilling, every being is forced to act.”

And that is exactly how it is with AI today. Choosing to do nothing about AI is only apparently “inaction”; in reality, it is just as much an action as any other choice. Put another way, we might say that when it comes to AI, we don’t have a choice about whether to respond. Inevitably, whether we do something or nothing, we necessarily do respond. Our only choice is how we do so.

Why does this mean we need to make value judgments? Quite simply, because responding to AI involves choice. We must choose between different possibilities, different responses, different visions—and all these choices are based ultimately on value judgments. To choose is to be forced to have a criterion of choice, after all, and any principled criterion will represent a value judgment of some kind.

The third and final reason for the necessity of making value judgments is that the development of AI itself constantly requires such judgments. We can see this by looking at a very general and very important example. Large language models (LLMs), such as ChatGPT, Claude, and Llama, reproduce patterns in words or images, and they “learn” which patterns to reproduce in which contexts. To learn how to match patterns to context, they must have a criterion for what constitutes a good pattern. If we want an AI to learn to speak well, then it must be able to recognize the patterns that constitute good speech. If we want an AI to learn to reason like a human, then it must be able to judge potential patterns against an ideal of good human reasoning. There is a very real sense, then, in which value judgments are inescapably built into the very fabric of AI. And if we want to think well about AI, we need to recognize this.
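To see what such a criterion looks like in practice, here is a minimal sketch of the kind of objective, a cross-entropy loss, that language models are commonly trained with. The vocabulary and probabilities below are invented for illustration; the point is that the definition of a “good” pattern, namely one that matches what a human actually wrote, is written directly into the objective:

```python
import math

# A model's predicted probabilities for the next token.
# (Invented numbers; a real model produces these from billions of parameters.)
predicted = {"the": 0.1, "cat": 0.2, "sat": 0.6, "mat": 0.1}

# The criterion of a "good pattern": match what a human actually wrote.
human_next_token = "sat"

# Cross-entropy loss is small when the model puts high probability on the
# human continuation. The value judgment (human text = good text) is baked
# into the very objective the model is optimized to minimize.
loss = -math.log(predicted[human_next_token])
print(f"loss = {loss:.3f}")  # 0.511; lower loss = closer fit to the human pattern
```

Everything the model comes to treat as good speech or good reasoning flows from choices like this one: which texts count as targets, and which deviations count as errors.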

The British economist John Maynard Keynes said that “practical men, who believe themselves to be quite exempt from any intellectual influence, are usually the slaves of some defunct economist.” That sounds about right for the practically minded individuals who refuse to reflect on the normative and philosophical aspects of AI. If you find yourself thinking that you don’t need to figure out answers to the questions we’ve described, it’s probably because you have already swallowed someone else’s answers without realizing it.

We cannot, then, avoid making value judgments when we think about AI and how to respond to it. But as soon as we have accepted this truth, a very hard problem arises: It is notoriously difficult to get value judgments right. Human history is littered with examples of men—and it was usually men, we’re afraid—who were absolutely certain they knew what was right, and the story of these men too often ends with bombs and bullets and gulags. So, we’re faced with something of a dilemma. On the one hand, we need to make value judgments. On the other hand, we should also be extremely humble about our ability to make them accurately and about the consequences of getting them wrong. How to proceed?

We’ve tried to take both these things into account when developing our model of how to think well about AI. We have incorporated the necessity and difficulty of making value judgments into the decision-making frameworks (OPEN and CARE) that we develop in this book. As you will see in more detail in later chapters, both frameworks require that you make value judgments—while simultaneously giving you the freedom to decide for yourself what it is that you value.

We will also need to make value judgments of our own in writing this book—we could hardly do otherwise, given what we have just said about their necessity. When we do, we will try to be fully explicit both about the fact that we are doing so and about the limitations of our judgments. We stand by our normative commitments, but we try to be as humble as we can about them.

The great English theater director Peter Brook sums up our attitude here rather well: “For a point of view to be of any use at all one must commit oneself totally to it, one must defend it to the very death. Yet, at the same time, there is an inner voice that murmurs: ‘Don’t take it too seriously. Hold on tightly, let go lightly.’”

With all that said, it’s time to discuss the big question: What is so valuable about humanity?

UNIQUE ≠ VALUABLE

One natural way of thinking about the value of humanity is to ask what is unique about human beings. This is especially tempting when we’re asking the question in the context of AI, of course, because one of the big fears/dreams (delete as applicable) about AI is that it will at some point develop into something that we will be compelled to call human.

Here’s the thing about temptation—it’s probably best to resist it. For about as long as there have been people, there have been attempts to identify something uniquely special about them. We have already seen one candidate above—Aristotle’s definition of human beings as rational animals. Alternatively, some people might say that human beings are special because they are God’s creation, because they use language, or because they can reflect on the contents of their consciousness.

We could go on, but this small sample is enough to make the point. None of those attempts, or indeed any others, have succeeded in identifying a unique property that makes human beings special. This failure isn’t accidental. It turns out that it is actually very difficult to identify a property that is simultaneously (a) valuable, (b) possessed by all human beings, and (c) not possessed by any other creature.


What makes humans unique—according to an AI

Hey Bard—in one paragraph, tell us what makes humans unique and sets them apart from AIs.

Bard: While AI excels in processing information and specific tasks, humans stand out due to our embodied experience, open-ended creativity, social and emotional intelligence, and intrinsic motivations. Our physical bodies shape our learning, emotions, and decision-making, while our flexible thinking allows us to innovate and adapt beyond programmed tasks. We navigate complex social interactions with empathy and build communities driven by values and personal growth, unlike AI’s current objective-oriented approach. Though AI is evolving, these unique human qualities set us apart in our ability to experience, create, and connect in profound ways.

Thanks, Bard. But “Our physical bodies”…?


Consider rationality, the feature so beloved by so many philosophers throughout history. If we claim that this is our unique defining feature, we face pressure from two directions. From one direction, the problem is that the feature excludes some human beings. What about babies? Someone in a coma? Someone born with a certain type of severe brain damage? Are these kinds of humans not as special as other human beings? From the other direction, the problem is that the feature includes some nonhuman animals as well—if rationality is taken to be the ability to act for reasons, then many animals must be considered rational.

Moreover, even if you do succeed in finding something unique about human beings, you face the problem of linking that thing to value. And this link is not at all obvious. Hitler was unique—and we should be very grateful that he was—but the leap from there to the assumption that he was valuable is an enormous one. Or from the other angle, to use Aristotle’s idea again, even if we say that the capacity to be rational is valuable, this is clearly not a unique capacity. Simply put, what is unique is not necessarily for that reason valuable. And what is valuable is not necessarily for that reason unique.

The lesson we draw is that we should forget about searching for a unique or distinctive feature about human beings that makes us valuable. Instead, let’s just directly ask: What is valuable about human beings?

Now, the best minds in the history of humanity have thought about this question for thousands of years without being able to settle it. We’re not going to presume to be able to do so here. But what we can usefully do is think about the value of humans in the context of AI. We can narrow the question by asking: In an age in which machines can write books, give legal advice, and make better medical decisions than most doctors, what is the value of human beings?

THE VALUE OF HUMANITY IN THE AGE OF AI

Imagine you’re playing a game of chess. You move a pawn to d4. Someone asks you, why did you do that?

Because it means I control the center, you reply. No, they say. That’s not what I meant. I mean, why are you wasting your time playing this game?

This imaginary conversation reveals two levels at which we can question the point or value or meaning of something. From within the game of chess, we can ask, “Why did you move the pawn?” and then the first answer is the right sort of answer—because it was an action that increased your chances of winning the game. But we can step further back and ask, “What is the point of chess at all?” And at that level, it is missing the point to say that moving the pawn to d4 increases your chances of victory.

What does all this have to do with the value of human beings in the age of AI?

Quite a lot, as it happens.

AI is currently a pattern-recognition and -replication machine. It learns sets of patterns that make sense; it learns about “games,” about practices of meaning and institutions of sense-making; and it is exhilaratingly and terrifyingly good at playing within the confines of those patterns. But what AI can’t do, at least not yet and maybe not ever, is make that typically human move of suddenly stopping, taking a breath, and thinking: Wait. Is there actually any point to this pattern? Is it good? Is it meaningful? Is it worthwhile?

An AI chess program must work within the rules of the game of chess. Human beings can decide to smash the board and walk away. Another word for this is freedom. This freedom, we suggest, is a very significant part of what is valuable about humanity—we are free in a way that AI currently is not. And even if AI ends up being free in this way, it doesn’t matter to the point we’re making. That point is: The human ability to be free is valuable to humans in and of itself, and not because it is something that only humans can do.

Now, we are not saying that human beings always do good things with that freedom. The briefest glance at human history, or just your social media feed, will tell you that humans often do very bad things with their freedom. What we are saying is, regardless of whether we do good or bad things with it, freedom itself is valuable.

This leads directly to the second thing we think is particularly valuable about human beings in the age of AI. The flip side of being able to break patterns is that we’re also able to make them. And some of the patterns we make have names like Truth, Beauty, and Love. Human beings have the ability to fill the universe with meaning simply by caring about it. This is a strange and wondrous ability, and again, it is not one that AI currently possesses. Nor do we know if it is an ability that AI will ever truly be able to possess.

The third and final thing that we want to pick out as valuable about humanity begins from a very simple fact. Human beings aren’t just rational animals. They are also emotional animals. Human beings can feel wonder and awe, joy and sorrow, love and hate. Right now, that is not something that AI can do. Further, humans are social animals. We have relationships. We encounter other minds and hearts and bodies and are transformed by them. We can share moments of connection that are beyond the reach of words to describe. If this is not one of the most valuable things about human beings, what could be?

Now, we cannot rule out the possibility that machines will, eventually, have the type of consciousness that humans have now. It could well turn out to be the case that machines will have a very similar kind of freedom or that they will be able to feel things or have loving relationships. It is entirely possible, in other words, that the things we have picked out as valuable about human beings end up being things that AI also has. But that would not mean that those things stopped being valuable. It would just mean that there is another type of agent in the world that has the same valuable qualities. If this turns out to be the case, we will have to answer some very hard questions about our responsibilities toward these agents and their responsibilities toward humans.

KEY TAKEAWAYS

  • We often speak about AI as if it is something coming in the future. But the truth is, it is already here and it is already deeply involved in our lives. From the shows we watch and the music we listen to, to the people we interact with online, AI already powers many of the algorithms that shape our daily experience.
  • Only some things matter. AI is going to cause unimaginable change and will have an almost infinite number of effects. But many of these will be unimportant, and we shouldn’t waste our time thinking about them. We should focus our efforts and abilities on identifying what the truly significant impacts of AI might be and on figuring out how to respond to them.
  • Human beings are free to choose and are able to feel. This means we can break patterns instead of following them, and it means we can choose what and whom we care about. We can love, and we can choose what we love. These are foundational values of being human, and they will remain important regardless of how AI develops.

The excerpt above, from the Introduction to the book Transcend: Unlocking Humanity in the Age of AI, is included here on this website by permission of the copyright holder, Faisal Hoque.

Transcend: Unlocking Humanity in the Age of AI is published by POST HILL PRESS, New York & Nashville, U.S.A.
ISBN: 979-8-89565-010-3
ISBN (eBook): 979-8-89565-011-0
© 2025 by Faisal Hoque

