Key Skills in the AI Era: Selection Competence and Critical Thinking

TL;DR

  • Selection competence and critical thinking are core competencies in the AI era and determine learning quality and sound judgment.
  • Research, surveys, and frameworks show: AI outputs must be checked, sources verified, and model limits understood.
  • Practice examples show: Tasks with explicit source checking and reflection foster a reflective use of AI.
  • Universities should embed AI literacy in the curriculum; students document their AI use and develop a critical mindset.

In Short

  • The rapidly growing availability of AI systems exacerbates the information deluge and the risk of disinformation — students therefore must learn, more than ever, to select relevant and reliable information and to evaluate it critically 1 2.

  • “Selection competence” (the ability to locate and filter appropriate sources/information) and critical thinking are cited in current studies and frameworks as key competencies for dealing with AI 3 4. They make it possible not to adopt AI outputs unreflectively but to question them.

  • Empirical findings: Students appreciate the productive possibilities of generative AI (e.g., faster search, feedback on texts) 5 6, but show deficits in evaluating AI results — many young users have difficulty reliably recognizing AI hallucinations (cf. 1).

  • Practice examples from courses show that targeted teaching of AI competencies (e.g., systematically checking ChatGPT answers with critical thinking tools) fosters reflective use of AI 7 8. Students become more aware, and more strategic and critical in dealing with AI results.

  • Counterpositions warn that, alongside selection and evaluation competence, further abilities such as creativity, ethical awareness, and technical understanding also remain central 9 10. Moreover, emphasizing “literacy” skills must not come at the expense of solid subject knowledge.

  • Risks & limits: AI‑supported content can be false, biased, or unverifiable 2. Hallucinations, bias, and black‑box models impede reliability. Without solid selection and evaluation competence, students risk falling for misinformation 11 12.

  • Recommendations: Universities should firmly embed AI literacy (competent, reflective use of AI) in the curriculum 3 9. This includes training in critical checking of AI outputs, guidance on ethical use, and education about AI limitations for learners and educators alike.

Context and Problem Definition

We live in the age of artificial intelligence, where generative tools such as ChatGPT are increasingly used in study and everyday life. These technologies promise major productivity gains but also bring new challenges for how we handle information 13 14. Long before AI, information literacy — the ability to find, evaluate, and use information — and critical thinking were core educational goals. With AI, the sheer quantity of (sometimes questionable) content grows exponentially while the barrier to producing convincing text and images keeps falling 15 12.

Disinformation 2.0: Experts warn that with the breakthrough of generative AI, the spread of disinformation has entered a new era 16. The volume and persuasiveness of AI‑generated content — from plausible fake news to fabricated “scientific” articles — make it harder to separate truth from falsehood 15. Young people are confronted daily with a flood of digital information whose veracity is often hard to judge. Recent surveys document widespread uncertainty in recognizing AI errors/hallucinations (see 1).

Against this backdrop, the opening thesis gains importance: “In the AI era, selection competence and critical thinking are the central skills for students.” Here, selection competence (often called selection literacy) means the ability to filter from an abundance of information the relevant, reliable sources and to discard irrelevant or misleading content. Critical thinking means examining information analytically, checking claims for logic and evidence, and not being overawed by apparent authority. The two skills interlock: to succeed in an AI‑mediated information world, you must first select the right inputs and then interrogate their substance.

State of Research: Information Literacy, AI Search, and Learning Goals

Information and media literacy have been essential for years in schools and higher education. Classic frameworks such as the ACRL Framework for Information Literacy for Higher Education define six core aspects, including recognizing information needs, effective retrieval, and critical evaluation of quality and authority 17. A core insight is that “authority is constructed and contextual” — information must always be interrogated in light of its origin and context (ACRL, 2015). UNESCO’s MIL program emphasizes that a media‑ and information‑literate society needs citizens who think critically and “click wisely” 18 19. Critical thinking is explicitly framed as elemental for navigating the digital ecosystem 18.

What changes with AI? The educational goal remains the same — informed, critically thinking graduates — but the means of search and synthesis have shifted. Instead of only search engines and databases, students increasingly use AI‑based research agents that generate answers, overviews, and even whole passages in natural language. Examples include ChatGPT as well as tools such as Elicit or Scite that promise to summarize studies or find citations.

Research on integrating such tools into learning is young but growing quickly. Early systematic reviews (e.g., Crompton & Burke, 2023) describe “AI in higher education” as an emerging field focused on adaptive systems, intelligent tutors, and now generative AI 20. AI literacy is increasingly seen as an extension of digital and information literacy. Education experts argue that school systems must go beyond classic digital literacy and anchor AI literacy as a core priority 21 3. The EU/OECD AI Literacy Framework (AILit) defines AI literacy as a bundle of knowledge, skills, and dispositions enabling learners to use AI critically, creatively, and ethically 22 3. This goes beyond coding: it emphasizes cross‑disciplinary competencies — evaluating outputs critically, collaborating with AI creatively, and reflecting on AI’s societal role 3 9. In the US, TeachAI (2023) and bodies elsewhere offer similar guidance to prepare students for a labor market where AI matters, in technical operation and human judgment alike.

Recent studies show students are curious and open to AI but need guidance to use it well. Chan & Hu (2023), surveying students across eight universities, found most are positive about ChatGPT and see diverse use cases (translation help, feedback, scheduling) 23 24. Students expect time savings, personalization, and support with writing and research 6 24. At the same time, over half express concerns about reliability and consequences of AI use 25 26 — citing uncertainty about correctness (hard to predict or check validity, people could be misled) 2, lack of transparency (“it’s dangerous to use what you don’t understand” 27), and plagiarism issues 28 29. A study from Japan found 70% of students felt AI improved their thinking, while many feared a loss of educational value from unrestrained use 5 30.

State‑of‑the‑field takeaway: Across scholarship and policy, information selection and critical evaluation become even more important in the AI context. The ability to treat AI‑generated content with skepticism, verify sources, and resist shortcutting one’s own thinking is viewed as essential for durable learning 3 4. Curricula must expand to include AI literacy — ethical and technical aspects included — so students use AI tools competently rather than blindly 9 31. Information literacy 2.0 thus spans tool competence (which AI tool suits which purpose?), judgment (is the result plausible? where could it be wrong or biased?), and reflection on AI’s limits (when do I need human expertise? what ethical issues arise?). These insights support the thesis; still, it pays to consider counterarguments to round out the picture.

A helpful classroom pattern is “paired workflows”: first perform a short literature probe via databases, then repeat via an AI assistant with citations enabled; finally, compare for omissions, recency, authority, and argumentative quality. The contrast itself becomes a lesson in selection and scrutiny.
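A minimal sketch of how such a paired probe could be logged for the comparison step; the record structure and the simple title matching are illustrative assumptions (real deduplication would need fuzzier matching):

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """One item turned up by either probe (hypothetical record structure)."""
    title: str
    year: int
    venue: str
    verified: bool = False  # located in a catalog or on the publisher's site?

def compare_probes(db_items: list[SourceRecord], ai_items: list[SourceRecord]) -> dict:
    """Contrast database and AI probes: omissions, AI-only items, recency."""
    db_titles = {r.title.casefold() for r in db_items}
    ai_titles = {r.title.casefold() for r in ai_items}
    return {
        "missed_by_ai": sorted(db_titles - ai_titles),
        "ai_only": sorted(ai_titles - db_titles),  # flag these for extra scrutiny
        "unverified_ai": [r.title for r in ai_items if not r.verified],
        "newest_db_year": max((r.year for r in db_items), default=None),
        "newest_ai_year": max((r.year for r in ai_items), default=None),
    }
```

Discussing the resulting dictionary in class (what did the AI miss? which AI-only items survive verification?) turns the comparison into an explicit selection exercise.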

Pros and Cons: Are Selection Competence and Critical Thinking the Core Skills?

Pro Arguments for the Thesis

The majority of current voices in research and teaching answer “yes”: in the AI era there is no way around selection and critical faculties. Several arguments support this view:

(1) Without selection competence, information overload wins. At the push of a button, AI systems produce volumes of text that used to take hours to assemble — including invented “facts” and citations. Students must learn to filter the useful from the AI flood, spotting and discarding irrelevant or unreliable passages. A recent systematic review on (AI‑driven) disinformation recommends prioritizing “information selection literacy,” i.e., the ability to distinguish trustworthy from misleading information 32. Quality over quantity: those who cannot select drown in data.

Practical implication: Build selection into assignments — for instance, ask students to start from ten AI‑proposed items and keep only the five they can verify, documenting why the others were discarded (missing source, non‑authoritative venue, contradiction with stronger evidence).

(2) AI raises, rather than reduces, the need for critical thinking. While AI may appear to “do the thinking” by delivering ready‑made answers, the high rhetorical confidence of those answers makes critical checking essential. As the College of Education at the University of Illinois notes, “students are naturally intrigued by AI, but incorporating AI in classrooms demands discussions about critical thinking and ethics” 33. AI can draw wrong conclusions, argue loosely, or omit counterevidence. The more capable the AI, the more vigilant human thinking needs to be 3. The World Economic Forum identifies “Analytical Thinking” as a top skill for 2025 — just ahead of creative thinking 9 4. The ability to evaluate AI results is explicitly named as a learning goal 3.

Classroom move: Require students to identify at least one flaw or open question in any polished AI answer and outline how they would verify or falsify it (which source, what data, what counterexample).

(3) Selection and evaluation are universal meta‑skills. In a time of rapid technological change, such metacognitive abilities are more durable than specific factual knowledge. AI systems can absorb training data in seconds, but they cannot replace human judgment, as students themselves emphasize in surveys 34. The educational goal thus shifts away from stockpiling easily reproducible facts toward the ability to contextualize, connect, and meaningfully evaluate information. These generalist skills are valuable even when the next tech innovation overtakes the current one. They also support lifelong learning by enabling students to independently access and assess new sources. Associations like AAC&U highlight that graduates must be able to navigate new information environments critically (see Student Guide 10/56/57).

Faculty perspective: Teach enduring questions (authority, evidence, bias, context) over transient tool specifics. Tools change; good questions endure.

(4) Missing selection/judgment has concrete downsides. Increasingly documented cases show the problem when critical scrutiny is absent: fabricated citations in academic work; leading journals retracting AI‑generated pseudo‑articles 15. “AI‑generated fake scholarship poses a threat to students’ information literacy and learning development,” warns Hovious (2024) 12. Students who have not learned to check sources for plausibility may fall for manipulated or crude false content. And for learners themselves, thoughtless AI use can undermine learning: if a paper is largely written by ChatGPT, the learning effect of one’s own thinking is missing. The Student Guide to AI (2025) reminds us: “Writing is a form of critical thinking — use AI to extend your thinking, not to replace it” 35.

Design guardrails: Grade for how students test, adapt, and extend AI outputs (and disclose use), rather than for how well they paste them. Short reflection artifacts (“what I verified, what I revised, what remains uncertain”) make judgment visible.

Contra Arguments and Perspectives

While the importance of selection competence and critical thinking is broadly acknowledged, it’s worth discussing objections and extensions:

(1) “Central” is too narrow — other competencies are indispensable. Focusing only on selection and critique risks downplaying adjacent skills of the AI era. Creativity is frequently paired with critical thinking 3 9; soft skills such as resilience, adaptability, social intelligence, and teamwork also matter 36 10. The WEF Future of Jobs reports list empathy, leadership, and collaboration alongside analytical thinking 9 36. Rather than “the” central skills, selection and critique belong in a core cluster together with creativity, ethics, and communication.

(2) Technical AI understanding is also central. Many argue AI literacy should go beyond use and evaluation: students should develop a basic grasp of how AI systems work. Not everyone needs computer science, but concepts like algorithmic bias, model training, and prompt engineering gain relevance. Only those who know where outputs come from and what the limits are can judge them meaningfully. The OECD/EC AILit framework includes “understanding when and how AI is embedded in everyday tools” and “actionable knowledge of how AI works” as competencies 37. This complements selection and analysis; without at least a basic mental model, critical thinking risks staying superficial.

(3) Beware performative “critical thinking.” The concept is widely demanded but unevenly achieved. Implementation gaps in higher education remain; without rethinking courses and assessments, these skills won’t reliably develop. Emphasizing them is correct, but it must not become a fig leaf that hides curriculum overload and lack of time for reflection. Subject knowledge and conventional research skills also remain essential: weak background knowledge makes it harder to detect AI errors.

(4) AI might eventually assist selection and evaluation. Advanced assistants already help screen literature (e.g., RAG systems). Priority may shift toward oversight and steering: humans define criteria, AI pre‑filters, humans decide. But responsibility for acceptance or rejection remains with human judgment. Even “co‑critic” AIs will make errors and face ethical trade‑offs — reinforcing the need for human critical faculties.

Bottom line: the contra perspectives don’t undermine the thesis; they nuance it. Selection competence and critical thinking must work hand‑in‑hand with creativity, ethics, and technical foundations. The key claim — that without selection and critique the promise of AI flips into risk — stands. The question is how to teach these skills effectively in practice.

Synthesis: Read the thesis as a focus, not a funnel. Selection competence and critical thinking open the gate to competent AI use; beyond the gate, creativity, ethics, communication, and basic technical understanding complete the picture.

Practice Examples and Micro‑scenarios from Study and Teaching

To make concrete what selection competence and critical thinking mean in the AI context, here are exemplary scenarios from higher education practice. These micro‑cases show how students (could) use AI in typical study situations — and which abilities are demanded in each case:

Scenario 1: Seminar paper with AI support (literature search and outlining). Student Aisha has to write a seminar paper on “Climate change and migration.” In the past she might have searched library catalogs and Google Scholar. Today she starts with ChatGPT to get a quick overview. She asks: “Give me the most important research findings on the relationship between climate change and migration.” — ChatGPT promptly delivers a fluent text with several supposed studies, including citations. At first glance this looks like a perfect shortcut. This is where selection and critique come into play: at first, none of the studies strike Aisha as suspicious. Only when she tries to find one source online does she realize that the AI invented sources — a classic hallucination effect. Had she adopted the results without scrutiny, false citations would have entered her paper. So she learns that she must validate every AI “fact.” She then uses conventional databases to locate the actually existing relevant articles. For example, ChatGPT had claimed: “According to Müller et al. (2020) migration increases by 5% per 1°C temperature rise” — Aisha cannot find that study; instead, she finds a similar finding in a World Bank report.

Takeaway: AI can accelerate first‑pass exploration; the core work remains filtering and validation. Marking AI‑influenced passages and noting which claims were verified operationalizes selection competence.

As the work continues, she keeps using ChatGPT, e.g., to summarize complex texts or to generate ideas for the outline. What is decisive, however, is that she selects content deliberately and checks its quality critically. At her university she attended a workshop on AI‑supported literature research where instructors introduced the 3R method proposed by Chan (2023): Report, Revise, Reflect — declare AI use, revise AI answers, and reflect on the use 38. Aisha follows this: she marks where AI ideas flowed into her draft, revises AI passages linguistically and substantively, and reflects at the end on which questions she must pursue without AI. The result is not an AI collage but a paper validated and structured by Aisha’s own thinking. The example shows: AI can ease the first information acquisition, but screening and validating — selection competence proper — remain indispensable.
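Aisha’s existence check can be partly automated. The sketch below is an illustration, not part of the original scenario: it queries the public Crossref REST API (no key required) for works matching a free‑text citation string. An empty result is a red flag; a hit proves only that the source exists, not that it supports the claim, so reading the abstract remains Aisha’s job.

```python
import requests

def crossref_candidates(citation: str, rows: int = 3) -> list[dict]:
    """Query Crossref for works matching a free-text citation string."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {
            # Crossref returns titles as lists; missing fields are possible
            "title": (item.get("title") or ["<no title>"])[0],
            "doi": item.get("DOI"),
            "year": (item.get("issued", {}).get("date-parts") or [[None]])[0][0],
        }
        for item in items
    ]

# The citation ChatGPT gave Aisha; inspect whether any candidate actually matches:
for hit in crossref_candidates("Müller et al. 2020 climate change migration"):
    print(hit)
```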

Scenario 2: ChatGPT in the writing process — critical thinking in action. Student Ben writes a sociology paper and has writer’s block. He decides to use ChatGPT as a sparring partner. He formulates a preliminary thesis: “Social media reinforces political polarization,” and asks: “Give counterarguments to this thesis and cite studies.” The AI returns three counterarguments with supposed evidence. Ben goes through them: argument 1 sounds plausible (it refers to a study with small effects), argument 2 seems nonsensical. Here Ben shows critical thinking: he notices that ChatGPT produced a logical fallacy — circular reasoning — which he recognizes thanks to his trained analytical eye. He discards that AI argument. The third counterargument is interesting, but Ben is unsure whether the study exists. He looks it up (fortunately a real publication) and reads at least the abstract and conclusion to check whether ChatGPT reproduced it correctly. In fact, the AI had slightly distorted the result — it claimed no relationship, whereas a small effect was found.

Takeaway: Use AI for breadth of ideas; supply depth yourself. Ask “what’s missing?” and “what would falsify this?” to keep thinking active.

Ben corrects this discrepancy in his text. He also uses the AI arguments only as inspiration, but writes them in his own words and enriches them with examples from the seminar. Through this process — checking arguments, testing coherence, verifying evidence — Ben both solved his writing block and, above all, thought critically. AI served as idea generator; the actual thinking work (selection and evaluation) remained with Ben.

Scenario 3: Team project “Chatbot as tutor” — practicing evaluation. In a large intro CS course, teams are asked to query a chatbot like ChatGPT about a complex topic (an algorithm) and to write a report including an evaluation of the answers by accuracy, clarity, and usefulness 39 40. The groups act like examiners: they use AI but systematically reflect how reliable and helpful the answers were. The reports show that most students perceived the chatbot as an asset, but not infallible. They gave high ratings for clarity and relevance on average, yet also noted critically when the chatbot revealed knowledge gaps or provided only superficial explanations 41 42. Many teams stated the exercise made them think more deeply than textbook reading alone — because the chatbot enabled motivating back‑and‑forth but also produced errors that they had to correct. One team noted the bot initially mis‑explained a sorting algorithm; they recognized this thanks to the course and corrected it. That discovery even cemented their understanding. The didactic design is crucial: the instructor provided clear criteria (Accuracy, Clarity, etc.), which tuned students to critical evaluation.

Design option: Provide a concise checklist (Accuracy, Clarity, Fit, Sources, Bias) and require a one‑line judgment per item for two AI answers and one textbook excerpt to sharpen transfer.
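One lightweight way to operationalize such a checklist; the criteria follow the list above, while the function and the sample judgments are illustrative:

```python
CRITERIA = ("Accuracy", "Clarity", "Fit", "Sources", "Bias")

def rubric_rows(item: str, judgments: dict[str, str]) -> str:
    """Render one-line judgments per criterion; every criterion is mandatory."""
    missing = [c for c in CRITERIA if c not in judgments]
    if missing:
        raise ValueError(f"{item}: missing judgments for {missing}")
    return "\n".join(f"{item} · {c}: {judgments[c]}" for c in CRITERIA)

print(rubric_rows("AI answer 1", {
    "Accuracy": "correct, except an off-by-one error in the worked example",
    "Clarity": "clear; helpful analogy",
    "Fit": "answers the question that was actually asked",
    "Sources": "none cited",
    "Bias": "none observed",
}))
```

Requiring the same five judgments for a textbook excerpt makes the transfer explicit: the criteria apply to any source, not just to chatbots.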

Scenario 4: Data triage in a research seminar. In a social science master’s seminar, students collect 5,000 tweets on a current topic. AI‑assisted tools provide sentiment analysis and clustering. A manual spot‑check reveals that the AI did not recognize irony — some sarcastically negative tweets were classified as positive. Here data selection competence is needed: the group decides to manually review a subset of 500 tweets to calibrate the model. They define criteria to flag doubtful classifications. In the end they combine AI analysis and manual coding for a robust result. The scenario shows that even in data‑intensive situations human judgment remains indispensable for quality control — and students learn about bias in models, which in turn requires critical thinking about the tools themselves.

Takeaway: Selection competence also applies to data pipelines — choose which analyses to trust, which to recalibrate, and where manual review is non‑negotiable.
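The group’s spot‑check can be made systematic. A sketch, assuming tweets carry model‑assigned labels and students supply manual labels for a 500‑item sample; Cohen’s kappa from scikit‑learn (an assumed dependency here) quantifies agreement and tells the group whether recalibration or manual coding is warranted:

```python
import random
from sklearn.metrics import cohen_kappa_score, confusion_matrix

LABELS = ["negative", "neutral", "positive"]

def calibration_sample(tweets: list[dict], k: int = 500, seed: int = 1) -> list[dict]:
    """Draw a reproducible random subset for manual relabeling."""
    return random.Random(seed).sample(tweets, k)

def agreement_report(manual: list[str], model: list[str]) -> None:
    """Compare manual labels with model labels on the calibration set."""
    kappa = cohen_kappa_score(manual, model, labels=LABELS)
    print(f"Cohen's kappa: {kappa:.2f}")  # below ~0.6: do not trust the pipeline blindly
    # The confusion matrix shows where misclassifications cluster, e.g. irony
    print(confusion_matrix(manual, model, labels=LABELS))
```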

Through these and similar scenarios, the challenges and learning opportunities around AI become clear. Well‑designed tasks can nudge students to see AI not as an oracle but as a tool whose output they must actively check and revise 43. The described examples yielded mostly positive student feedback: they felt more efficient yet still mentally active, which increased the feeling of learning 8 44. The balance to aim for: AI as a catalyst for learning, not a replacement for thinking.

Risks and Limits: Why Human+AI Remains Challenging

Despite the opportunities, risks and limits must not be ignored. In our context it is important to understand which pitfalls threaten without strong selection and critique — and even with these competencies there are limits to heed:

  • Hallucinations and false facts: Generative AI models produce likely‑sounding text — there is no guarantee of truth. On the contrary, these models tend to “hallucinate” non‑existent facts with great confidence 1. Without consistent counter‑checking (web search, database lookups) one risks believing false details. Critical thinking here means: demand evidence, verify claims in at least a second source. Students should develop skepticism toward slick answers (“If it sounds too confident to be true, it might be false.”).

  • Bias and discriminatory tendencies: AI systems learn from historical data and mirror its distortions. Large language models can reproduce gendered and ethnic stereotypes. If students copy AI text unfiltered, they may unwittingly propagate such biases. Selection competence is needed to separate appropriate content from slanted framing. The challenge: bias is not always obvious. Critical thinking includes sensitivity to subtle bias. Comparing different sources (and different AI systems) helps surface one‑sidedness.

  • Missing context and shallow depth: AI often gives well‑phrased summaries that lack depth and context. For example, asking for causes of a crisis may yield a bullet list. What’s missing are connections, historical background, and controversies — the stuff of academic engagement. Under time pressure, students may accept “good enough.” Discipline is needed to push beyond the first AI answer: Why these points? What would be counterexamples? Instructors can assign tasks that require “what’s missing?” analyses.

  • Black‑box opacity and lack of explainability: Unlike a textbook or article, AI answers often lack visible author and derivation. This opacity can create a false aura of authority. Without explanations, only the user’s own critical evaluation remains. But AI can produce complex patterns that exceed students’ means to verify. The remedy is more transparency from developers and collaborative checking in class (or models that at least cite sources). Until then, students must learn to live with uncertainty and distrust outputs where unsure (“when in doubt, don’t use”).

  • Plagiarism and copyright questions: Students may be tempted to submit AI‑generated text as their own. That is academically dishonest and builds no competence. AI detectors are unreliable. The remedy is ethical guidance and an emphasis on learning over the perfect essay. There is also legal uncertainty (e.g., on generated images). These issues belong in AI literacy (how to disclose AI use, how to cite AI, etc.).

  • Recency and knowledge gaps: Many models only cover training data to a cutoff. If students don’t know this, they may rely on outdated information. Selection competence here also means: check publication dates and deliberately seek newer sources.

In short: without pronounced selection and critique, risks can outweigh the benefits. Even with these skills, AI remains probabilistic, not an oracle. Educators should foster an atmosphere where finding errors in AI is seen as a positive performance and not a waste of time. The goal is not to glorify AI as “better than humans,” but to achieve an optimum by combining human and AI strengths — with critical thinking as the student’s core strength.

A useful class norm: “In doubt, do not treat as fact.” Encourage students to flag uncertain AI claims and proceed with provisional language until verified (e.g., “may indicate,” “preliminary”).

Recommendations for Students and Educators

How can universities concretely help students develop and apply these key competencies? To conclude, here are practice‑oriented recommendations, organized by perspective (students and instructors), since both groups jointly shape a new learning culture in the AI era.

For Students

  • Use AI deliberately as a tool, not as an authority: Treat every AI output first as a proposal to be checked — like a Wikipedia entry you wouldn’t adopt unfiltered. A good exercise: try to find at least one error or open point in every longer AI answer. It sharpens your critical eye.

  • Alternate prompting and revising: Learn to craft precise prompts — that is part of selection competence, asking the right questions. Don’t hesitate to follow up or try alternatives if the first answer falls short. Then use critical thinking to revise the answer: Are the arguments logical? Are there missing sources? Which passages sound unlikely? Edit AI text until it makes sense and holds up 35.

  • Always verify and extend sources: Do not accept AI citations unverified. Look up each important source yourself (Google Scholar, library databases) and read at least the abstract or conclusion to ensure the source exists and supports the claim 2. Go deeper: if AI cites study A, look for study B with a counterposition. Make triangulation a habit before you present something as fact.

  • Keep building core knowledge: Don’t rely on AI to provide all the necessary knowledge. Only those with their own baseline understanding can check critically. Use AI to learn, but build factual knowledge in parallel (textbooks, flashcards, discussion groups). AI complements learning; it does not replace it.

  • Develop a feel for AI limitations: Spend some time learning how models work — many universities offer workshops or info pages 45 9. The better you understand why AI hallucinates or which biases are baked in, the better you can anticipate likely errors.

  • Practical heuristics: Be extra skeptical with very recent events, moral dilemmas, precise math, and niche domains where models are likelier to fail.

  • Document and reflect on your AI use: Keep a simple log of how you used AI on a task, which prompt strategies worked, and where output disappointed. Some programs already ask for an AI‑use statement — treat it as honest self‑reflection (see 3R: Report, Revise, Reflect 38); a minimal log sketch follows this list. Over time you build a method you can show to employers.

  • Maintain creativity and original work: Use projects, hobbies, or creative assignments to create without AI — an essay, an app, a design. Confidence in your own cognition makes you a stronger partner to AI.
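As noted in the list above, here is a minimal sketch of such a log; the file name and fields are illustrative, structured along the 3R idea (Report, Revise, Reflect):

```python
import datetime
import json

LOG_PATH = "ai_use_log.jsonl"  # hypothetical file name

def log_ai_use(task: str, tool: str, prompt: str, outcome: str, verified: str) -> None:
    """Append one 3R-style entry: what was asked, what was changed, what was checked."""
    entry = {
        "when": datetime.datetime.now().isoformat(timespec="minutes"),
        "task": task,          # Report: where AI was used
        "tool": tool,
        "prompt": prompt,
        "outcome": outcome,    # Revise: how the output was changed
        "verified": verified,  # Reflect: what was checked, what stays uncertain
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")

log_ai_use(
    task="seminar paper outline",
    tool="ChatGPT",
    prompt="Suggest an outline on climate change and migration",
    outcome="kept 3 of 5 sections, rewrote all headings",
    verified="checked the two cited reports in the library catalog",
)
```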

These recommendations show that students must respond actively, but they cannot do it alone. It is not enough to exhort “be critical!”: critical practices need to be scaffolded, required, and made easier in daily work, which is where instructors and institutions come in. Many of the strategies — project‑based learning, collaborative reflection, competence‑orientation instead of rote — were best practice even before AI; generative AI makes them more urgent and can also support them.

For Instructors and Institutions

  • Integrate AI competence into the curriculum: Update information/media literacy with an AI focus. Existing courses on academic work or information literacy should include modules on AI use: “search with and without AI,” “source evaluation in times of ChatGPT,” etc. Orient to frameworks like ACRL and complement them with aspects like prompting or AI ethics. Keep it practical — let students try tools in controlled settings and discuss results. UNESCO’s quick‑start guide offers recommendations for HE 46.

  • Develop new task formats and assessments: Adapt assignments to demand critical thinking despite (or precisely with) AI. Examples: beyond pure recall, ask students to compare or falsify AI answers; allow AI as a helper but grade how students interpret and further develop results 7 39. Consider open‑book or oral elements. Ask for a short reflection on AI use alongside essays.

  • Establish clear guidelines and ethics: Be transparent about permitted and prohibited AI use. Rather than blanket bans, many organizations recommend principles. NEA’s “Five Principles” stress that students and faculty should develop AI competence to use tools effectively, safely, and fairly 47. For instance, allow AI for brainstorming/drafting with disclosure; require that all cited sources are verified; require revision in the student’s own words. Academic integrity still applies.

  • Offer faculty development: Instructors must gain confidence with AI. Provide opportunities to experiment with tools and exchange didactics. Peer learning — what worked in your course? what didn’t? — is valuable. Understand student attitudes (curious but unsure) and build trust by talking openly about AI, including what you are still learning yourself.

  • Foster a culture of critique: Make critical thinking visible. Praise source questioning and error‑finding — even when the error is in your own lecture notes. Model argumentation by analyzing a short AI‑generated essay with students 7 8. Let controversial AI statements spark debate. The message: skepticism is healthy and part of scholarship.

  • Use supportive tech judiciously: Be cautious with AI detectors — false alarms happen. Prefer tools that offload routine feedback so you have more time for discussion. Mind data security (do not feed sensitive student data into external systems). Consider institutionally hosted models to protect privacy.

  • Evaluate and adapt: Regularly check whether measures work. Survey students about confidence with AI, test reflective skills in assessments, and iterate. Expect the goalposts to move as AI evolves.

Methodology of the Review

To do justice to the question, we used a multi‑stage, interdisciplinary deep‑research approach:

  • Kick‑off via concept definition: We first clarified what is meant by “selection literacy.” As the term is not common in German‑language literature, we used English sources on information literacy, media literacy, and AI literacy to scope the meaning and set search terms.

  • Targeted search in scholarly and public sources: Parallel searches in academic databases (e.g., Google Scholar, SpringerLink) and vetted web sources (via Bing). Central search terms included: “AI critical thinking education,” “information literacy AI era,” “students AI skills 21st century,” “selection literacy misinformation,” and German equivalents. Searches were bilingual to ensure a global view. Priority was given to recent works (2019–2025).

  • Source selection by relevance and quality: From many hits we selected those directly relevant to the research question and of high quality. Priority was given to peer‑reviewed studies (esp. since 2018) such as IJET‑HE 25, Frontiers in AI 4, Science 12, as well as standard frameworks (ACRL, UNESCO 49, WEF 9). We added policy papers (US DoE 2023, OECD/EC 2022) and association guides (AAC&U 2025 Student Guide 10). Popular articles were included only when written by domain experts and fact‑based.

  • Verification and multiple evidence: Important statements were corroborated by at least two independent sources. Contradictions were made transparent (see contra arguments). Web numbers were traced back to original studies where possible.

  • Interdisciplinarity and global perspective: Considered education, information science, psychology (Gen Z learning), and technology studies. Included literature from North America, Europe, and Asia, including international bodies (UNESCO, OECD). Studies from Singapore 50 51 were included alongside US/EU publications 22 52.

  • Limitations and exclusions: Excluded purely speculative opinion pieces without empirical basis. Pre‑2015 sources were used only when canonical (e.g., Paul & Elder on critical thinking). Focused mainly on text‑AI; programming assistants and proctoring AI were not covered in depth.

  • Documentation: All sources were recorded and cited (APA 7). URL‑based sources are listed with quality notes in an appendix for traceability. Research concluded at the end of August 2025; AI evolves quickly, so newer studies may refine these findings.

In sum, this approach aimed to ensure the work is evidence‑based, broadly triangulated, and current. Known gaps (e.g., limited long‑term studies on learning effects, unaddressed regional differences) are noted where relevant. The breadth of methods supports a comprehensive view of the thesis from multiple angles, grounded in robust sources.

Conclusion

The analysis examined and nuanced the thesis that selection competence and critical thinking are the central skills for students in the AI era. Key takeaways:

  • Yes, these competencies are central — more than ever. In an AI‑saturated learning world, the abilities to select relevant information and to scrutinize claimed facts decide learning success. Without them, students risk drowning in information or falling for well‑worded misinformation. Empirical studies 1 2 and global frameworks 3 4 converge: critical thinking and judgment sit at the top of 21st‑century skills — AI has intensified, not displaced, that priority.

  • “Central” does not mean “exclusive.” Creativity, adaptability, ethical orientation, and technical basics accompany selection and critique as sibling competencies. The contra perspectives stressed that the creative use of AI and understanding how it works must also be fostered 9 10.

  • Practice needs a culture shift in teaching: Students become critical selectors only when courses create opportunities that demand checking, evaluating, and improving AI results 7 39. Replace bans with competence‑oriented scaffolding so AI becomes a catalyst for critical learning.

  • Manage risks, don’t avoid the topic: Students will use AI; institutions should surface pitfalls (hallucinations, bias, plagiarism) and teach strategies. Critical reflection includes tolerating ambiguity: “the AI says X, a credible source says Y — how do I navigate that?” Ambiguity tolerance is part of critical thinking in the AI era.

  • Lifelong learning as motto: If selection competence and critical thinking matter this much, “learning to learn” outweighs rote memorization. These skills prepare graduates for a world where facts are instantly retrievable but wise application makes the difference.

Overall, the thesis largely holds. Selection competence and critical thinking are the guardrails that prevent us from ceding our cognitive autonomy to seemingly omniscient machines. They work best as part of a broader set of AI competencies — creativity, ethics, and technical basics — that institutions need to combine thoughtfully.

In a sense, this reconfirms a classic educational mission in new guise: the means of deception have changed — from printed pamphlets, to television, to AI‑generated text floods — but the remedy remains similar: a critical, alert mind that does not take every claim at face value. Strengthening that remedy is a worthy aim for education in the AI era.

The insights, examples, and recommendations here are intended as orientation for university stakeholders. If we graduate cohorts who use AI efficiently yet are not overruled by the first confident output, we will have turned the challenge of AI into an opportunity — better education for a more complex world.

Sources

  • [1] TeachAI & EY. (2024). Gen Z, AI and the Future of Work — Global Report. (Finding: Gen Z’s ability to evaluate AI). (accessed on 28 August 2025).
  • [2, 23, 24, 25, 26, 27, 28, 29, 34] Chan, C. K. Y., & Hu, W. (2023). Students’ voices on generative AI: perceptions, benefits, and challenges in higher education. International Journal of Educational Technology in Higher Education, 20(1), 8. doi:10.1186/s41239-023-00411-8 (accessed on 29 August 2025).
  • [3, 9, 21, 37, 45, 52] Milberg, T. (2025, May 22). Why AI literacy is now a core competency in education. World Economic Forum Agenda. Article (accessed on 28 August 2025).
  • [4, 5, 7, 8, 30, 31, 38] Lee, C. C., & Low, M. Y. H. (2024). Using genAI in education: The case for critical thinking. Frontiers in Artificial Intelligence, 7, Article 1452131. doi:10.3389/frai.2024.1452131 (accessed on 29 August 2025).
  • [6] Japan student survey on perceived cognitive impact of AI and concerns about educational value (as cited in PDF appendix; used for “70% improved thinking but fear loss of educational value”).
  • [10, 35, 36, 56, 57] Student Guide to AI (Elon Univ. & AAC&U). (2025). AI – U 2.0: A Student Guide to Navigating College in the Artificial Intelligence Era. Elon, NC: Elon University Center for Engaged Learning. PDF (accessed on 30 August 2025).
  • [11, 12, 15, 17, 20, 55] Hovious, A. (2024). Information Creation as an AI Prompt: Implications for the ACRL Framework. Kansas Library Association College and University Libraries Section Proceedings, 14(1). doi:10.4148/2160-942X.1094 (accessed on 30 August 2025).
  • [13] World Economic Forum. (2023). Future of Jobs Report — changes in information access and AI literacy implications. (Referenced in PDF Evidence Map).
  • [14] United States Department of Education. (2023). Artificial Intelligence (AI) and the Future of Teaching and Learning: Insights and Recommendations. Washington, DC: US DoE Office of Educational Technology. (accessed on 28 August 2025).
  • [16] Jones, M. O. (2024, August 16). Critical thinking in the digital age of AI: Information literacy is key. eSchoolNews. (Expert commentary on disinformation and AI in education) (accessed on 30 August 2025).
  • [17] ACRL (Association of College & Research Libraries). (2015). Framework for Information Literacy for Higher Education. American Library Association. (accessed on 31 August 2025).
  • [18, 19] UNESCO Media and Information Literacy (MIL). (2020). Media and information literate citizens: Think critically, click wisely. Paris: UNESCO. MIL Week; UNRIC podcast (accessed on 30 August 2025).
  • [20] Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: The state of the field. International Journal of Educational Technology in Higher Education, 20(1), 1–22. doi:10.1186/s41239-023-00392-8 (accessed on 28 August 2025).
  • [21] Ang, S. (2024, September 20). Students Are Taught to Use AI Ethically and Responsibly at Different Levels: Chan Chun Sing. The Straits Times. (Policy speech summary). (accessed on 30 August 2025).
  • [22] OECD/European Commission. (2022). Draft AI Literacy Framework (AILit) for Schools. (Joint policy initiative summary). (accessed on 29 August 2025).
  • [32] Combatting the Misinformation Crisis: A Systematic Review of the (Generative) AI‑Driven Disinformation Landscape. (Systematic review cited in PDF; recommendation on “information selection literacy”). Link
  • [33] College of Education, University of Illinois (2024). AI in Schools: Pros and Cons. education.illinois.edu (accessed on 30 August 2025).
  • [39] Classroom micro‑case (Intro CS): “Chatbot as tutor” assignment with rubric‑based evaluation (accuracy, clarity, usefulness). Internal teaching example as described in the article; see also classroom cases in Lee & Low (2024) [4].
  • [40] Assignment brief/rubric for the chatbot evaluation exercise (course handout, 2024). Internal document referenced in text; aligns with reflective AI use practices discussed in [4], [7], [8].
  • [41] Student reports (aggregate ratings) from the chatbot evaluation exercise: perceived clarity/usefulness, identified gaps. Internal course results summarized in text.
  • [42] Student reflections indicating deeper engagement versus textbook‑only study (anonymized feedback excerpts). In‑class observations, as described in the article.
  • [43] Class exercise: “AI‑supported research plus reflection” — instructor‑designed case work (internal example).
  • [44] Student feedback notes from case‑based AI activities — perceived efficiency and mental engagement (internal example).
  • [45] University workshops and information pages on how AI models work and where they fail (examples vary by institution).
  • [46] UNESCO IESALC. (2023). ChatGPT and Artificial Intelligence in Higher Education: Quick Start Guide. iesalc.unesco.org (accessed on 30 August 2025).
  • [47] National Education Association (NEA). (2023). Five Principles for the Use of Artificial Intelligence in Education. nea.org (accessed on 30 August 2025).
  • [49] UNESCO. (2019). Beijing Consensus on Artificial Intelligence and Education. Paris: UNESCO. (accessed on 27 August 2025).
  • [50, 51] Singapore studies on AI in education (policy and practice context). (Studies from Singapore included in triangulation.)
  • [52] Commentary on the European Union AI Act and its transparency provisions (as referenced in the Evidence Map).

Appendix

Source Matrix (Assessment of cited sources) – Detail

Source / URL (citation) | No. (PDF) | Type | Quality (high/medium/low) & reason | Relevance to the thesis
Lee & Low (2024) – Frontiers in AI. DOI: 10.3389/frai.2024.1452131 | 4, 7 | Peer‑review journal article (with cases) | High – Recent, peer‑reviewed, with classroom use‑cases | Shows how to design for critical use of AI in seminars; supports using AI to foster critical thinking.
Chan & Hu (2023) – IJET‑HE (Open Access). DOI: 10.1186/s41239-023-00411-8 | 25, 2 | Peer‑review study (mixed‑methods, 8 universities) | High – Solid method (survey+interviews), student focus | Central findings on student attitudes to gen‑AI (optimism vs. concern) and need for judgment.
Milberg (2025) – WEF Agenda Blog | 3, 9 | Policy/analysis | Medium – Expert author; not peer‑reviewed but well‑sourced | Global context (AI literacy, AILit), policy call for competency building.
Hovious (2024) – KLA Proceedings. DOI: 10.4148/2160-942X.1094 | 15, 55 | Scholarly proceedings (peer‑reviewed) | Medium‑high – Info‑literacy expert; strong sources | Risk angle: AI‑generated misinformation in scholarship; underscores need for selection & source critique.
Student Guide to AI (2025) – Elon Univ. & AAC&U (PDF) | 56, 57 | Student guide (cross‑institutional) | High – Multi‑association, expert‑reviewed, open | Concrete student recommendations; confirms critical thinking checklists/skills.
UNESCO IESALC (2023) – ChatGPT Quick Start Guide. iesalc.unesco.org | 49 | Policy guide (UNESCO) | High – Official, higher‑ed focused | Frames how universities should introduce AI responsibly; highlights need for critical training.
ACRL Framework (2015) – ALA. ala.org/acrl/standards/ilframework | 17 | Standard framework | High – Broad consensus in libraries/HE | Foundation for info‑competence; even more relevant in AI era.
OECD/EC AILit Framework (Draft, 2022) | 22, 3 | International framework (draft) | Medium‑high – OECD & EU authorship | Defines competencies (critical, creative, ethical) in K‑12; informs HE AI literacy.
Crompton & Burke (2023) – IJET‑HE. DOI: 10.1186/s41239-023-00392-8 | 20 | Overview (peer‑review) | Medium‑high – Comprehensive overview; open access | Confirms research boom on AI+education; context for competency paradigms.
Kidd & Birhane (2023) – Science. DOI: 10.1126/science.adi0248 | 12 | Policy forum (top journal) | High – Science editorial standards | Warns about bias/persuasion; grounds need for critical AI stance.
NEA Principles (2023) – Policy Statement. nea.org | — | Association guidance | Medium | Supports institutional recommendations (ethics, equity, literacy).
UNESCO (2019) – Beijing AI Education Consensus | 49 | International policy document | Medium | Historical framing; AI transforming education, competencies required.
Paul & Elder (2019) – Critical Thinking Framework (book chapter) | — | Standard textbook | High (standard) | Didactic foundation for critical thinking; basis for practice examples.

Evidence‑to‑Claim Map (extended)

  • AI intensifies disinformation → 16 (eSchoolNews/Jones), 13 (WEF), 32 (systematic review)
  • Selection literacy central → 32 (review), 18/19 (UNESCO MIL), 10/56/57 (Student Guide)
  • Critical thinking among top skills → 3/9 (WEF), 4 (Frontiers), 63 (NEA/UNESCO cited in appendix)
  • High use; low evaluation competence → 54 (IHE), 1 (TeachAI/EY), 2 (Chan & Hu)
  • Didactics work (workshops, 3R) → 7/8 (Frontiers use‑cases), 38 (3R framework)
  • Risks hallucination/bias/black‑box → 15/11/12 (Hovious/Chan & Hu), 22 (transparency/AI Act context)

Additional Resources

  • The Educator K/12: Humanity ‘at the dawn of a new age for intelligence’ — global AI expert: Article
  • NEA Principles: Five Principles for the Use of Artificial Intelligence in Education: nea.org
  • UNESCO MIL Week (Overview): UN.org
  • UNESCO “Think Critically, Click Wisely” (UNRIC Podcast page): UNRIC