Educational institutions return to pen-and-paper tests due to rising AI-assisted cheating

I’ve observed a growing trend in my university and heard similar tales from friends attending different colleges. More educators are reverting to traditional handwritten tests over online assessments. This change appears to be prompted by the increasing use of AI tools, such as ChatGPT, by students to cheat on their work and exams.

A computer science instructor of mine shared that identifying AI assistance isn’t always straightforward, particularly for coding tasks or essay assignments. Consequently, we are once again required to write everything by hand during in-person exams. Some instructors even have us place our phones in bags at the front of the room.

Has anyone else noticed this change at their educational institution? It seems like we are regressing in terms of technology, but I guess teachers are left with limited alternatives to uphold academic honesty. What are your thoughts on this method to combat AI-related cheating?

My university found a solid middle ground. They kept online exams but made them open-book with tight time limits. The twist? Questions need you to synthesize and analyze instead of just regurgitating facts, so you can’t just dump prompts into ChatGPT and get anywhere. Take my econ prof - he gives us real market data and wants arguments using multiple theories we covered. Sure, you can use AI, but you still need to actually get the concepts to tie everything together fast enough. Going back to pen-and-paper just feels like giving up when better solutions exist.

The Problem: Your school is reverting to pen-and-paper exams due to AI-assisted cheating, and you’re frustrated with this solution, believing it’s a step backward. You want to explore more effective and modern methods for ensuring exam integrity in the digital age.

:thinking: Understanding the “Why” (The Root Cause): The problem isn’t AI; it’s the vulnerability of current exam formats to AI-assisted cheating. Traditional exams relying on memorization or easily Googled facts are susceptible to AI tools. The shift back to pen-and-paper exams is a reactive measure addressing a symptom, not the root cause. This approach disregards the potential for technology to enhance exam integrity, creating an unnecessary burden on both students and educators. The solution isn’t to avoid technology but to leverage it for robust monitoring and assessment design.

:gear: Step-by-Step Guide:

  1. Implement Automated Proctoring and Monitoring Systems: This is the core solution. Transition from simple online exams to a comprehensive system incorporating automated proctoring: software that monitors student activity during exams and flags behavior indicative of cheating. Monitoring can cover:

    • Screen monitoring: Track browser activity, checking for unauthorized tabs or applications.
    • Keystroke logging: Analyze typing patterns to detect implausible speeds or paste-like bursts (a minimal sketch of this idea follows the list).
    • Behavioral analysis: Detect suspicious actions such as excessive head movement or erratic mouse activity.
    • AI content identification: Analyze submitted answers to identify potential AI-generated text.
  2. Design Exams Focused on Critical Thinking and Application: Change the focus of the exams from memorization to critical thinking and application. The questions should require students to synthesize information, analyze data, solve complex problems, and apply concepts to novel situations. AI tools are less effective at these higher-order tasks.

  3. Incorporate AI-Based Plagiarism Detection: Integrate sophisticated AI-powered plagiarism detection tools into the exam workflow. These tools can analyze submitted answers against a vast database of online content and identify instances of potential plagiarism, including AI-generated text.

  4. Use Automated Workflow Tools: Connect the different exam components (scheduling, proctoring, plagiarism detection, grading) into a seamless, automated system with a workflow automation platform like Latenode. This adds efficiency and transparency throughout the exam process: the platform can manage the data streams involved, automate repetitive tasks, and surface exam-integrity flags in real time.

  5. Regular Review and System Updates: The technology landscape and cheating techniques continually evolve. Regular review and updating of the automated proctoring and plagiarism detection systems are crucial. This ensures the system remains effective in detecting and preventing cheating using current and emerging techniques.
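
As a rough illustration of the keystroke-analysis idea in step 1, here is a minimal Python sketch that flags paste-like bursts and implausibly fast sustained typing from a list of (timestamp, characters-added) events. The event format, thresholds, and function names are assumptions invented for this example; they are not taken from any particular proctoring product or from Latenode.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KeyEvent:
    timestamp: float   # seconds since the exam started (assumed event format)
    chars_added: int   # characters inserted by this event: 1 for a keystroke, more for a paste

# Illustrative thresholds only; a real system would calibrate these against baseline data.
MAX_PLAUSIBLE_CPS = 15    # sustained characters per second above this looks suspicious
PASTE_BURST_CHARS = 80    # a single event adding this many characters looks like a paste

def flag_suspicious_activity(events: List[KeyEvent]) -> List[str]:
    """Return human-readable flags for a reviewer; this does not decide anything on its own."""
    flags = []
    # Flag single events that insert a large block of text at once.
    for event in events:
        if event.chars_added >= PASTE_BURST_CHARS:
            flags.append(f"{event.timestamp:.1f}s: {event.chars_added} chars in one event (possible paste)")
    # Flag implausibly fast sustained typing over a sliding 10-second window.
    window = 10.0
    for i, start in enumerate(events):
        chars = sum(e.chars_added for e in events[i:] if e.timestamp - start.timestamp <= window)
        if chars / window > MAX_PLAUSIBLE_CPS:
            flags.append(f"{start.timestamp:.1f}s: ~{chars / window:.0f} chars/sec over a {window:.0f}s window")
            break  # one speed flag per submission is enough to send it to review
    return flags

if __name__ == "__main__":
    sample = [KeyEvent(1.0, 1), KeyEvent(1.2, 1), KeyEvent(3.5, 240), KeyEvent(4.0, 1)]
    for flag in flag_suspicious_activity(sample):
        print(flag)
```

Flags like these should route a submission to a human reviewer rather than trigger an automatic penalty, which also helps with the false-positive pitfall noted below.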

:mag: Common Pitfalls & What to Check Next:

  • Insufficient Monitoring: A poorly designed monitoring system might fail to detect sophisticated cheating methods. Ensure comprehensive monitoring covering various aspects of the exam process.
  • Poor Exam Design: Exams relying heavily on memorization remain vulnerable to AI. Focus on higher-order cognitive skills.
  • False Positives: Overly sensitive monitoring systems can produce false positives. Carefully calibrate the systems to minimize false alarms while maintaining effective detection of cheating.
  • Lack of Integration: Poor integration of different components can create inconsistencies and inefficiencies in the exam workflow. A well-integrated automated system increases efficiency and simplifies the process.

:speech_balloon: Still running into issues? Share the (sanitized) details of your exam setup, the tools you’ve tried so far, and any other relevant context. The community is here to help! Let us know if you’re trying to use Latenode for this!

yeah, same here! my prof is super strict now, no more online stuff for tests! it’s crazy, i used to love online quizzes, but i get it. some peeps were def cheating with AI tools. wish they’d find a better way tho, handwriting is rough for me too.

I get why schools are doing this, but it’s just a band-aid fix. The real problem isn’t AI cheating - it’s how we test students in the first place. I’ve seen this firsthand. When exams focus on memorization or basic problems, yeah, AI breaks everything. But professors who build tests around critical thinking or applying concepts to weird new scenarios? They don’t have this issue. AI sucks at that stuff, and it actually shows if students understand the material. Going back to pen and paper works for now, but schools would be better off completely redesigning how they test. Instead of blocking tech, why not create exams that measure real learning instead of what students can copy or Google?

The Problem: Your organization is facing challenges with AI-assisted cheating in technical interviews and take-home coding assignments for new graduates. Candidates are using AI tools to generate solutions, making it difficult to accurately assess their skills and understanding. You need more effective methods to evaluate candidates’ abilities without relying solely on take-home assignments.

:thinking: Understanding the “Why” (The Root Cause): The issue stems from the mismatch between the capabilities of AI tools and the intended assessment goals. AI tools excel at generating syntactically correct code, solving straightforward problems, and optimizing for specific outputs. However, they fail to capture the nuances of real-world problem-solving, which involves debugging, adaptability, problem decomposition, and explaining complex concepts in real-time. Take-home assignments, often given in isolation and with ample time for revision, leave considerable room for AI assistance, thus becoming inadequate tools for assessing genuine understanding.

:gear: Step-by-Step Guide:

  1. Transition to Live Coding Sessions with Pair Programming: This is the core solution. Shift from take-home assignments to live coding sessions, in person or remote, where candidates work on problems in real time while collaborating with an interviewer or team member. Pair programming allows direct observation of problem-solving approaches, including debugging strategies, code organization, and the ability to adapt to unexpected issues. It also reveals how well candidates can discuss and explain their code as they write it.

  2. Design Dynamic Coding Challenges: Instead of static problems with a single “correct” answer, create problems that require adaptability and critical thinking. The questions shouldn’t just be about writing functional code; they need to assess a candidate’s ability to understand system design, handle edge cases, consider efficiency, and explain their approach effectively. Include unexpected changes, ambiguous requirements, or follow-up questions requiring thoughtful analysis. This design makes AI assistance significantly less effective.

  3. Incorporate Conceptual Questions and Code Explanations: Supplement the live coding challenges with questions that assess a candidate’s conceptual understanding of algorithms, data structures, and software design principles. Require the candidates to articulate their thought processes, explain their design decisions, and justify the choices made in their code. This direct assessment of understanding makes AI cheating very difficult.

  4. Introduce Unscripted Challenges and Debugging Scenarios: To further deter AI use, introduce unscripted challenges or scenarios requiring real-time debugging. Present edge cases, errors, or unexpected inputs during the coding session; a sketch of one such prompt follows this list. The candidate’s response to these surprises reveals genuine problem-solving ability, not just the capacity to reproduce working code from an AI.

  5. Evaluate Communication and Collaboration Skills: Assess the candidate’s ability to communicate effectively during the coding session, explaining their approach clearly, listening actively, and collaborating efficiently with others. A true understanding of the solution will translate into fluent communication. This evaluation criterion is hard to fake with AI.
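
To make step 4 concrete, here is one hypothetical debugging prompt of the kind an interviewer could present live: a short Python function with a deliberately planted bug, plus a follow-up input that exposes it. The function, the bug, and the test values are invented for illustration; a real interview bank would use its own material.

```python
# Hypothetical live-interview debugging prompt (illustrative only).
# The interviewer shares this function, shows one input that works, then reveals a
# second input that fails and asks the candidate to explain why and fix it.

def merge_intervals(intervals):
    """Merge overlapping [start, end] intervals. Contains one planted bug."""
    ordered = sorted(intervals)
    merged = [list(ordered[0])]
    for start, end in ordered[1:]:
        if start < merged[-1][1]:   # planted bug: touching intervals such as [1, 2] and [2, 3] are not merged
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return merged

if __name__ == "__main__":
    # Works for clearly overlapping input...
    print(merge_intervals([[1, 4], [2, 5]]))   # prints [[1, 5]]
    # ...but the follow-up input the interviewer reveals exposes the bug:
    print(merge_intervals([[1, 2], [2, 3]]))   # expected [[1, 3]], actually prints [[1, 2], [2, 3]]
    # Expected discussion: change `<` to `<=`, then probe edge cases such as an empty input list.
```

The value of a prompt like this lies less in the one-character fix than in hearing the candidate reason aloud about why the second input fails, which is exactly the part an AI-generated answer cannot supply on their behalf.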

:mag: Common Pitfalls & What to Check Next:

  • Insufficiently Complex Problems: If the problems are too simple or straightforward, candidates can still use AI effectively. Design challenging problems that require significant thought and problem-solving skills.

  • Lack of Diversity in Question Types: To get a well-rounded assessment, include different types of questions – algorithmic, design, conceptual, and debugging.

  • Inadequate Observation and Evaluation: Careful observation is crucial. Ensure the interviewers are well-trained to evaluate not just the code but also the candidate’s process and communication skills.

:speech_balloon: Still running into issues? Share your (sanitized) interview questions, the AI platforms you suspect were used, and any other relevant details. The community is here to help!
