Academic use of AI by Bangladeshi students
Published: Jan 2025
A TEJ Intelligence research report on how students use AI chatbots for studying, what patterns emerge, and how educators can respond with practical guardrails.

Executive summary
Students are rapidly adopting AI chatbots as “always-available helpers.” The most common value comes from explanation, translation, summarization, and practice—but the biggest risks come from unverified answers, shortcut behavior, and weak citation habits.
- The same tool supports both learning and shortcuts; outcomes depend on prompts, constraints, and evaluation design.
- Students tend to trust fluent output too quickly—especially when there’s time pressure or limited teacher feedback.
- The most effective classroom response is not banning: it’s teaching verification, disclosure, and “AI-as-tutor” prompting patterns.
- Institutions can reduce misuse by changing assessment formats (process-based grading, oral checks, and evidence requirements).
Background
Generative AI has lowered the cost of getting an “answer-shaped” response. For students, that changes how they prepare for exams, complete homework, and practice skills. In Bangladesh, the impact is shaped by access to devices, connectivity, English proficiency, and the structure of assessments.
Research questions
- What study tasks do students use AI chatbots for most frequently?
- What prompts lead to real learning (vs quick completion)?
- Where do students over-trust outputs, and how do they validate?
- How do assessment formats influence misuse?
- What practical policies work for schools without strong enforcement capacity?
Method (report format)
This report is written as a field-oriented research brief. It combines (1) practical observations of common student workflows, (2) synthesis of recurring patterns educators report, and (3) a structured framework for evaluating “learning vs shortcut” behavior.
We map how students go from assignment → prompt → output → submission, including where verification does (or doesn’t) happen.
We categorize prompts by intent (tutoring, drafting, translation, answer-generation, and “show the steps”) and then evaluate the learning signal each intent tends to carry.
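To make the prompt-intent categorization concrete, here is a minimal sketch of how prompts could be tagged and given a rough learning-signal rating. It is illustrative only: the intent labels follow this report, but the keyword lists, the ratings, and the classify_prompt function are hypothetical assumptions, not part of the study instruments.

```python
# Illustrative sketch only: a keyword-based tagger for prompt intent.
# Intent categories follow the report; the keywords and learning-signal
# ratings are hypothetical and would need calibration on real prompts.

INTENT_KEYWORDS = {
    "tutoring":          ["explain", "why", "step by step", "quiz me", "teach me"],
    "show_the_steps":    ["show the steps", "show your working", "derive"],
    "translation":       ["translate", "in english", "in bangla"],
    "drafting":          ["draft", "outline", "improve my paragraph"],
    "answer_generation": ["final answer", "write the full", "solve this for me"],
}

# Rough ordering of how much learning each intent tends to signal.
LEARNING_SIGNAL = {
    "tutoring": "high",
    "show_the_steps": "high",
    "translation": "medium",
    "drafting": "medium",
    "answer_generation": "low",
}

def classify_prompt(prompt: str) -> tuple[str, str]:
    """Return (intent, learning_signal) for a student prompt."""
    text = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return intent, LEARNING_SIGNAL[intent]
    return "unclassified", "unknown"

if __name__ == "__main__":
    samples = [
        "Explain photosynthesis step by step, then quiz me.",
        "Write the full assignment on the Liberation War.",
    ]
    for sample in samples:
        print(sample, "->", classify_prompt(sample))
```

In practice such a tagger is only a starting point: ambiguous prompts (for example, a translation request that also asks for an explanation) still need human review.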
Key findings (qualitative)
- The highest-value use is “explain this like I’m new,” especially in math/science and English writing. When prompts ask for step-by-step reasoning and self-quizzing, the outcome looks like tutoring.
- When deadlines are close, prompts trend toward “give the final answer” or “write the full assignment.” The tool becomes a completion engine, not a learning engine.
- Students often treat fluent output as correct. Without explicit requirements to cite sources, show steps, or cross-check, hallucinations can slip into submissions.
- Translation and paraphrasing are major drivers, especially where students must submit English writing. This can either support learning or produce “surface-level” writing without comprehension.
Implications
- AI doesn’t remove the need for fundamentals; it raises the bar for assessment design.
- Schools need “verification literacy” (how to check, cite, and disclose AI usage), not just detection.
- Policy should target behaviors (plagiarism, fabrication) rather than tools.
Recommendations
- Ask for steps, not just answers (“teach me, then quiz me”).
- Always cross-check important facts (textbook/teacher/source links).
- Write a short “what I learned” reflection to force understanding.
- Grade the process: drafts, reasoning steps, and checkpoints.
- Require citations and disclosures when AI is used.
- Use oral follow-ups or short in-class checks for authenticity.
Limitations
This report is intended as a practical brief; the claims on this public page are presented without supporting statistics. If you need a version with full methodology, instruments, and supporting evidence for citation, request the full pack.


