Applyo - College Application Platform

CAT 2025 Slot 3 VARC Question & Solution

Reading Comprehension | Hard

Passage

The passage below is accompanied by four questions. Based on the passage, choose the best answer for each question.

Imagine a world in which artificial intelligence is entrusted with the highest moral responsibilities: sentencing criminals, allocating medical resources, and even mediating conflicts between nations. This might seem like the pinnacle of human progress: an entity unburdened by emotion, prejudice or inconsistency, making ethical decisions with impeccable precision. . . .

Yet beneath this vision of an idealised moral arbiter lies a fundamental question: can a machine understand morality as humans do, or is it confined to a simulacrum of ethical reasoning? AI might replicate human decisions without improving on them, carrying forward the same biases, blind spots and cultural distortions from human moral judgment. In trying to emulate us, it might only reproduce our limitations, not transcend them. But there is a deeper concern. Moral judgment draws on intuition, historical awareness and context - qualities that resist formalisation. Ethics may be so embedded in lived experience that any attempt to encode it into formal structures risks flattening its most essential features. If so, AI would not merely reflect human shortcomings; it would strip morality of the very depth that makes ethical reflection possible in the first place.

Still, many have tried to formalise ethics, by treating certain moral claims not as conclusions, but as starting points. A classic example comes from utilitarianism, which often takes as a foundational axiom the principle that one should act to maximise overall wellbeing. From this, more specific principles can be derived, for example, that it is right to benefit the greatest number, or that actions should be judged by their consequences for total happiness. As computational resources increase, AI becomes increasingly well-suited to the task of starting from fixed ethical assumptions and reasoning through their implications in complex situations.

But what, exactly, does it mean to formalise something like ethics? The question is easier to grasp by looking at fields in which formal systems have long played a central role. Physics, for instance, has relied on formalisation for centuries. There is no single physical theory that explains everything. Instead, we have many physical theories, each designed to describe specific aspects of the Universe: from the behaviour of quarks and electrons to the motion of galaxies. These theories often diverge. Aristotelian physics, for instance, explained falling objects in terms of natural motion toward Earth's centre; Newtonian mechanics replaced this with a universal force of gravity. These explanations are not just different; they are incompatible. Yet both share a common structure: they begin with basic postulates - assumptions about motion, force or mass - and derive increasingly complex consequences. . . .

Ethical theories have a similar structure. Like physical theories, they attempt to describe a domain - in this case, the moral landscape. They aim to answer questions about which actions are right or wrong, and why. These theories also diverge and, even when they recommend similar actions, such as giving to charity, they justify them in different ways. Ethical theories also often begin with a small set of foundational principles or claims, from which they reason about more complex moral problems.

Question 1

Choose the one option below that comes closest to being the opposite of “utilitarianism”.

A. The committee adopted a non-egoist framework, ranking policies by their contribution to overall social welfare and treating self-interest as a derivative concern within institutional evaluation.
B. The authors advocated an absolutist stance, following exceptionless rules regardless of outcomes and evaluating choices by broadest societal benefit.
C. The council followed a prioritarian approach, assigning greater moral weight to improvements for the worst-off rather than to maximising total welfare across the affected population.
D. The policy was cast as deontological ethics, selecting the option that delivered the highest total benefit to citizens while presenting duty as a secondary consideration in public decision-making.
Solution:

Main Idea of the Question

The question asks which position comes closest to being the opposite of utilitarianism.
According to the passage, utilitarianism is defined as an ethical approach that:

  • Seeks to maximise overall wellbeing or total welfare
  • Focuses on the sum total of benefits
  • Does not give special weight to who receives the benefit, as long as the total outcome is maximised

The key feature of utilitarianism is its concern with aggregate outcomes, not distribution.


Explanation of the Correct Answer

Why Option C Is Correct

Option C represents a prioritarian approach, which directly contrasts with utilitarianism:

  • It does not aim to maximise total welfare
  • Instead, it gives greater moral importance to helping those who are worst off
  • This may come at the cost of a lower total level of wellbeing

By prioritising distribution over aggregate outcomes, this approach challenges the core logic of utilitarianism.

Hence, Option C is the position most opposed to utilitarianism.
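The contrast between maximising the total and weighting the worst-off is easiest to see with numbers. The sketch below is purely illustrative and not from the passage: the policies, welfare values, and the square-root weighting are hypothetical choices (prioritarianism only requires *some* extra weight for lower welfare levels).

```python
import math

# Two hypothetical policies, each listed as the welfare level
# of every affected person (illustrative numbers only).
policy_x = [10, 10, 1]   # higher total, but the worst-off person fares badly
policy_y = [7, 7, 6]     # lower total, but the worst-off person fares better

# Utilitarian ranking: pick the policy with the greatest total welfare.
utilitarian_choice = max([policy_x, policy_y], key=sum)

# One simple prioritarian ranking: apply a concave transform (here sqrt)
# so that gains to people at low welfare levels count for more.
def prioritarian_score(policy):
    return sum(math.sqrt(w) for w in policy)

prioritarian_choice = max([policy_x, policy_y], key=prioritarian_score)

print(utilitarian_choice)   # [10, 10, 1]: total 21 beats total 20
print(prioritarian_choice)  # [7, 7, 6]: the weighting favours the worst-off
```

The two rules disagree on the same numbers, which is exactly the sense in which prioritarianism "challenges the core logic of utilitarianism": it can prefer a distribution with a strictly lower total.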


Why the Other Options Are Incorrect

  • Option A:

    • Explicitly ranks policies by overall social welfare
    • This is a direct expression of utilitarian thinking
  • Option B:

    • Appears rule-based at first glance
    • However, it still appeals to the “broadest societal benefit”
    • This keeps it within a utilitarian framework
  • Option D:

    • Claims to be deontological
    • Yet still focuses on achieving the highest total benefit
    • This contradicts deontological ethics and aligns with utilitarianism instead

Final Answer

Correct Answer: Option C

Question 2

Which one of the options below best summarises the passage?

A. The passage rejects formal methods in principle. It holds that moral judgement cannot be expressed in disciplined terms and concludes that AI should not serve in courts, medicine, or diplomacy under any conditions.
B. The passage weighs the appeal of an impersonal AI judge against doubts about moral grasp. It claims codified schemes retain case nuance at scale and uses a physics analogy to predict convergence on a unified framework.
C. The passage weighs the appeal of an impersonal AI judge against doubts about moral grasp. It warns that codification can erode case-sensitive judgement, allow axiom-led reasoning at scale, and use a physics analogy to model structured plurality.
D. The passage highlights administrative gains from automation. It treats reproducing human moral judgement as progress and argues that, as computational resources increase, AI can be responsible for decision-making across varied institutional settings.
Solution:

Main Idea of the Passage

The passage makes three interconnected points about ethics and AI:

  • It explains why an impersonal AI moral judge is appealing, especially in situations that demand consistency and neutrality.
  • It raises a core concern: moral judgement relies on intuition, history, and context, and reducing ethics to fixed rules may strip away this depth.
  • It acknowledges that ethics has always involved formal systems, such as utilitarianism, and uses a physics analogy to show that multiple formal systems can coexist without collapsing into a single, final theory.

The passage does not fully endorse or reject AI-based moral judgement; instead, it highlights the tension and diversity within ethical reasoning.


Explanation of the Correct Answer

Why Option C Is Correct

Option C best captures the passage’s balanced argument because it:

  • Explains why an impersonal AI judge can seem attractive
  • Points out the risk that codifying ethics may erode case-sensitive judgement
  • Recognises that principle-based reasoning can still apply across many contexts
  • Reflects the physics analogy, showing that ethical systems can be:
    • Formal
    • Coherent
    • Yet plural and non-unified

This matches the passage’s emphasis on structured variety rather than a single moral framework.


Why the Other Options Are Incorrect

  • Option A:

    • Goes beyond the passage by claiming it rejects formal methods entirely
    • Says AI should never be used in courts, medicine, or diplomacy
    • The passage does not take such an absolute position
  • Option B:

    • Incorrectly claims that codified schemes retain case nuance at scale
    • Misreads the physics analogy as predicting a single unified ethical theory
    • The passage explicitly argues the opposite
  • Option D:

    • Treats AI replication of human judgement as clear progress
    • The passage remains critical and questioning, not celebratory

Final Answer

Correct Answer: Option C

Question 3

The passage compares ethics to physics, where different theories apply to different aspects of a domain, and says that AI can reason from fixed starting points in complex cases. Which one of the assumptions below must hold for that comparison to guide practice?

A. There is a principled way to decide which ethical framework applies to which class of cases, so the system can select the relevant starting points before deriving a recommendation.
B. Real cases never straddle different areas, so a case always fits exactly one framework without any overlap whatsoever.
C. A single master framework replaces all others after translation into one code, so domain boundaries disappear in application.
D. Once formalised, all ethical frameworks yield the same recommendation in every case, so selection among them is unnecessary.
Solution:

Main Idea of the Passage

The passage argues that ethics, like physics, contains multiple formal theories, each built from different starting principles. It suggests that:

  • AI can reason effectively within such formal systems
  • Ethical reasoning does not rely on a single universal framework
  • Different ethical theories may apply to different kinds of cases
  • The physics analogy shows that plural theories can coexist without being unified

For this analogy to work in practice, there must be a way to determine which ethical framework applies in a given situation.


Explanation of the Correct Answer

Why Option A Is Correct

Option A fits the passage because it addresses a necessary condition for the comparison to physics to be practically meaningful:

  • If multiple ethical frameworks exist, AI must know which one to start from
  • This requires a principled method to decide which framework applies before reasoning begins
  • This mirrors physics, where:
    • Different theories apply to different domains
    • The choice of theory comes before formal reasoning

Without this assumption, AI would have no way to select the appropriate “starting points,” undermining the passage’s analogy.


Why the Other Options Do Not Fit

  • Option B:

    • Assumes real cases fall neatly into non-overlapping ethical frameworks
    • The passage instead stresses complexity and divergence, not clean separation
  • Option C:

    • Assumes ethical theories must be unified into a single framework
    • This directly contradicts the passage’s use of physics to argue for pluralism without unification
  • Option D:

    • Assumes all ethical theories lead to the same conclusions
    • The passage explicitly notes that different theories can:
      • Diverge
      • Justify similar actions in different ways

Final Answer

Correct Answer: Option A

Question 4

All of the following can reasonably be inferred from the passage EXCEPT:

A. By analogy with physics, compact postulates can yield broad predictions across incompatible theories, and ethics can likewise share structure while continuing to diverge rather than close on a single comprehensive framework.
B. Encoding ethics into fixed structures risks stripping away intuition, history, and context and, if that occurs, the depth that enables reflective judgement disappears. So, machines would mirror our limits rather than exceed them.
C. The appeal of an AI judge rests on immunity to bribery, partiality, and fatigue; yet the text questions whether procedural cleanliness amounts to moral understanding without lived context and interpretive depth.
D. With fixed moral starting points and expanding computational resources, the argument forecasts convergence on one ethical system and treats contextual judgement as unnecessary once formal reasoning scales across domains and cultures.
Solution:

Main Idea of the Passage

The passage explores the relationship between ethics, formal systems, and AI. Using an analogy with physics, it argues that:

  • Multiple ethical theories can exist side by side
  • These theories may be formally structured yet still incompatible
  • Moral judgement depends on context, intuition, and lived experience
  • Formalising ethics risks losing depth, even if it brings clarity or consistency

The passage consistently resists the idea of a single, unified ethical framework.


Evaluation of the Options

Why Options A, B, and C Can Be Inferred

  • Option A:

    • Follows directly from the physics analogy
    • The passage notes that physical theories are incompatible yet structured
    • Ethical theories are described in the same way, supporting pluralism rather than convergence
  • Option B:

    • Strongly supported by the passage’s warning that formalisation may:
      • Flatten moral judgement
      • Strip ethics of intuition, history, and context
    • The idea that machines may mirror human limits aligns with this concern
  • Option C:

    • Reflects the central tension raised early in the passage
    • AI appears appealing because it seems neutral and unemotional
    • The author questions whether this neutrality equals genuine moral understanding

Why Option D Cannot Be Inferred

  • Option D:
    • Predicts a convergence toward a single ethical system
    • This directly contradicts the passage’s use of physics to show that:
      • Formal systems can coexist
      • Context remains essential
      • Unification is neither expected nor necessary

The passage repeatedly argues against convergence, making this option unsupported.


Final Answer

Statement That Cannot Be Inferred: Option D