Everything you need to set up AI-powered writing placement and program assessment at your institution.
MyWritingAssessment serves two complementary purposes: Writing Placement (placing incoming students into appropriate writing courses) and Program Assessment (measuring learning outcomes for accreditation). You can use one or both.
The placement workflow: create a Placement Configuration with levels and a rubric, add writing prompts for students, create an assessment and generate access codes, have students complete a timed writing session in a proctored environment, then review the AI recommendations and finalize placements.
The program assessment workflow: define learning outcomes aligned to accreditation standards, create outcome-based rubrics, set up cohorts for your programs, collect or upload student essays, and generate outcome reports for accreditation.
MyWritingAssessment uses role-based access control with three roles. System Administrators have full access to create and manage assessments, configurations, learning outcomes, rubrics, and analytics. Faculty Reviewers can review student submissions, view AI recommendations, and override placement decisions. Proctors can view assessments and monitor active placement sessions.
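A minimal sketch of how these roles could be represented in code. The role names come from the description above; the permission keys and the `can` helper are hypothetical illustrations, not the platform's actual API.

```typescript
// Illustrative only: permission keys are assumptions, not MyWritingAssessment's real schema.
type Role = "system_administrator" | "faculty_reviewer" | "proctor";

const rolePermissions: Record<Role, string[]> = {
  system_administrator: [
    "assessments:manage",
    "configurations:manage",
    "outcomes:manage",
    "rubrics:manage",
    "analytics:view",
  ],
  faculty_reviewer: [
    "submissions:review",
    "recommendations:view",
    "placements:override",
  ],
  proctor: ["assessments:view", "sessions:monitor"],
};

function can(role: Role, permission: string): boolean {
  return rolePermissions[role].includes(permission);
}

console.log(can("proctor", "placements:override")); // false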
Configure the course levels students can be placed into based on their writing assessment. Common examples include Developmental Writing, ENG 101 College Composition I, and ENG 102 College Composition II. Each level needs a name, description, display order, and active status.
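A sketch of a placement level record, assuming the four fields named above; the interface shape and example descriptions are illustrative, not a documented schema.

```typescript
// Hypothetical shape for a placement level; field names mirror the prose.
interface PlacementLevel {
  name: string;
  description: string;
  displayOrder: number; // controls how levels are listed
  active: boolean;
}

const levels: PlacementLevel[] = [
  { name: "Developmental Writing", description: "Foundational writing support", displayOrder: 1, active: true },
  { name: "ENG 101 College Composition I", description: "Standard first-year composition", displayOrder: 2, active: true },
  { name: "ENG 102 College Composition II", description: "Advanced first-year composition", displayOrder: 3, active: true },
];

// Only active levels, in display order, are offered as placement targets.
const placementTargets = levels
  .filter((l) => l.active)
  .sort((a, b) => a.displayOrder - b.displayOrder);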
Define the criteria the AI uses to evaluate student writing and determine placement. Common rubric criteria include Thesis and Argument, Organization, Evidence and Support, Grammar and Mechanics, and Style and Voice. For each criterion, provide a name, key, description, and weight.
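To make the weight field concrete, here is a sketch of weighted rubric criteria and a combined score. The 0-1 weight convention, the example weights, and the `weightedScore` function are assumptions for illustration; the platform's own scoring math isn't documented here.

```typescript
// Hypothetical rubric criterion shape and a weighted-score calculation.
interface RubricCriterion {
  name: string;
  key: string;        // stable identifier used when scores are reported
  description: string;
  weight: number;     // relative importance; weights assumed to sum to 1
}

const criteria: RubricCriterion[] = [
  { name: "Thesis and Argument", key: "thesis", description: "Clarity and strength of the central claim", weight: 0.3 },
  { name: "Organization", key: "organization", description: "Logical structure and flow", weight: 0.2 },
  { name: "Evidence and Support", key: "evidence", description: "Use of relevant supporting detail", weight: 0.2 },
  { name: "Grammar and Mechanics", key: "grammar", description: "Sentence-level correctness", weight: 0.15 },
  { name: "Style and Voice", key: "style", description: "Tone, word choice, and readability", weight: 0.15 },
];

// Combine per-criterion scores (e.g. 0-100) into one weighted score.
function weightedScore(scores: Record<string, number>): number {
  return criteria.reduce((total, c) => total + (scores[c.key] ?? 0) * c.weight, 0);
}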
Create the essay topics students respond to during placement assessments. Best practices: choose topics that don't require specialized knowledge, ask students to take a position, be clear about expectations, avoid emotionally triggering topics, and consider multiple prompts with random assignment to reduce sharing.
Train the AI to match your institution's specific placement standards by providing example essays. Upload 2-3 sample essays for each placement level. For each sample, explain why it belongs at that level. Mark your best example as Primary. The AI uses these to calibrate its recommendations. Calibration Strength ranges from 1/5 Minimal to 5/5 Excellent based on how many samples you provide per level.
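The exact mapping from sample counts to the 1/5-5/5 scale isn't specified above. The sketch below assumes strength is driven by the level with the fewest samples, purely as an illustration of the idea that more samples per level mean better calibration.

```typescript
// Assumption: calibration strength depends on the level with the fewest samples.
// The formula is illustrative, not the platform's documented rule.
function calibrationStrength(samplesPerLevel: number[]): number {
  if (samplesPerLevel.length === 0) return 1;
  const weakest = Math.min(...samplesPerLevel);
  // Clamp into 1-5: no samples at some level -> 1/5 Minimal, 4+ everywhere -> 5/5 Excellent.
  return Math.max(1, Math.min(5, weakest + 1));
}

console.log(calibrationStrength([3, 3, 2])); // 3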
Combine your configuration and prompts into a deployable assessment. Settings include Name, Configuration, Prompts, Time Limit, and Status (Draft, Published, or Closed).
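A sketch of an assessment settings object, assuming the fields listed above; the property names and the `acceptsSessions` helper are hypothetical.

```typescript
// Hypothetical assessment settings; field names follow the prose, the shape is assumed.
type AssessmentStatus = "draft" | "published" | "closed";

interface Assessment {
  name: string;
  configurationId: string;  // the Placement Configuration to use
  promptIds: string[];      // writing prompts available to students
  timeLimitMinutes: number;
  status: AssessmentStatus;
}

// Only published assessments should accept student sessions.
function acceptsSessions(a: Assessment): boolean {
  return a.status === "published";
}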
Manage secure access through session codes: enable the session code, set an expiration time, share it with students in the testing environment, and have students enter the code to begin. The proctored session flow: students enter their ID and name, accept the Academic Integrity Agreement, see the writing prompt, write in a distraction-free interface with behavioral monitoring (typing patterns, paste attempts, focus changes), and submit for AI analysis.
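A minimal sketch of session-code validation with an expiration time; the code format, case-insensitive matching, and two-hour window are assumptions for illustration.

```typescript
// Illustrative session-code check; not the platform's actual validation logic.
interface SessionCode {
  code: string;
  enabled: boolean;
  expiresAt: Date;
}

function validateSessionCode(entered: string, active: SessionCode, now: Date = new Date()): boolean {
  return (
    active.enabled &&
    now.getTime() < active.expiresAt.getTime() &&
    entered.trim().toUpperCase() === active.code.toUpperCase()
  );
}

// Example: a code shared at the start of a proctored session, valid for two hours.
const code: SessionCode = {
  code: "WRITE-2481",
  enabled: true,
  expiresAt: new Date(Date.now() + 2 * 60 * 60 * 1000),
};
console.log(validateSessionCode("write-2481", code)); // true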
After submission, the AI generates placement recommendations. For each submission, reviewers see the complete essay, the AI-recommended level with a confidence score, a detailed rubric evaluation per criterion, integrity flags, and behavioral data. Confidence levels: High (80% or above) needs minimal review; Medium (60-79%) benefits from review; Low (below 60%) calls for faculty review. Faculty can Accept the recommendation or Override it with an explanation, and all overrides are tracked for audit.
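The confidence bands above translate directly into review guidance. The band boundaries come from the text; the function and type names are illustrative.

```typescript
// Maps an AI confidence score (0-100) to the review guidance described above.
type ReviewGuidance = "minimal review needed" | "benefits from review" | "faculty review recommended";

function reviewGuidance(confidence: number): ReviewGuidance {
  if (confidence >= 80) return "minimal review needed";   // High
  if (confidence >= 60) return "benefits from review";    // Medium (60-79%)
  return "faculty review recommended";                     // Low (below 60%)
}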
Behavioral metrics: Paste Attempts (text pasted rather than typed), Tab Switches (navigating away from the test; 5 or more is suspicious), Correction Ratio (a very low ratio, below 3%, may indicate pre-written content), and Thinking Pauses (zero pauses with substantial typing is suspicious). Flags provide information for faculty decisions; they don't automatically invalidate submissions.
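A sketch of flagging logic using the thresholds mentioned above (5+ tab switches, correction ratio below 3%, zero pauses with substantial typing). The metric definitions, the 1,000-character cutoff for "substantial typing", and the field names are assumptions; the point is that flags are informational.

```typescript
// Illustrative flagging rules; flags inform faculty review, they never auto-invalidate.
interface BehaviorMetrics {
  pasteAttempts: number;
  tabSwitches: number;
  correctionRatio: number;   // corrections as a fraction of keystrokes, 0-1
  thinkingPauses: number;
  charactersTyped: number;
}

function integrityFlags(m: BehaviorMetrics): string[] {
  const flags: string[] = [];
  if (m.pasteAttempts > 0) flags.push("text was pasted rather than typed");
  if (m.tabSwitches >= 5) flags.push("frequent tab switches");
  if (m.correctionRatio < 0.03) flags.push("very low correction ratio; may indicate pre-written content");
  if (m.thinkingPauses === 0 && m.charactersTyped > 1000) flags.push("no thinking pauses despite substantial typing");
  return flags; // informational only
}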
Define outcomes you want to measure. For each: Code (e.g. WC1, CT2), Name, Description, and optional Category. Consider mapping to SACSCOC, HLC, AACSB, ABET, or regional standards.
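A small sketch of a learning-outcome record with the fields named above; the optional standards-mapping field is a hypothetical illustration of how alignment might be tracked.

```typescript
// Hypothetical learning-outcome record; fields mirror the prose.
interface LearningOutcome {
  code: string;                 // short identifier, e.g. "WC1" or "CT2"
  name: string;
  description: string;
  category?: string;            // optional grouping, e.g. "Written Communication"
  alignedStandards?: string[];  // free-text references to accreditation standards (assumed field)
}

const outcome: LearningOutcome = {
  code: "WC1",
  name: "Thesis Development",
  description: "Students construct a clear, arguable thesis supported throughout the essay.",
  category: "Written Communication",
};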
Program rubrics measure achievement on learning outcomes, unlike placement rubrics, which sort students into levels. Each rubric has Criteria (mapped to outcomes), Performance Levels (typically four: Exemplary, Proficient, Developing, Beginning), Descriptors, and Benchmarks (the minimum level for meeting an outcome).
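A sketch of a program-rubric criterion with a benchmark check. The four performance levels and the benchmark concept come from the text; encoding the levels as 1-4 and the `meetsOutcome` helper are assumptions.

```typescript
// Hypothetical program-rubric shapes; numeric level encoding is an assumption.
type PerformanceLevel = 1 | 2 | 3 | 4; // 1 Beginning, 2 Developing, 3 Proficient, 4 Exemplary

interface ProgramCriterion {
  outcomeCode: string;          // e.g. "WC1", mapping the criterion to a learning outcome
  descriptors: Record<PerformanceLevel, string>;
  benchmark: PerformanceLevel;  // minimum level that counts as meeting the outcome
}

function meetsOutcome(score: PerformanceLevel, criterion: ProgramCriterion): boolean {
  return score >= criterion.benchmark;
}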
Group students for analysis. For each cohort: Name, Program, Term/Year, and optional Level. Strategies: by Program and Year, by Course, Entry vs Exit comparison, or by Accreditation Cycle.
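A small sketch of cohort records set up for an entry-versus-exit comparison; the field names follow the prose and the example cohorts are invented for illustration.

```typescript
// Hypothetical cohort record; fields mirror the prose.
interface Cohort {
  name: string;
  program: string;
  term: string;    // e.g. "Fall 2024"
  level?: string;  // optional, e.g. "Entry" or "Exit" for pre/post comparison
}

const cohorts: Cohort[] = [
  { name: "BA English 2024 Entry", program: "BA English", term: "Fall 2024", level: "Entry" },
  { name: "BA English 2025 Exit", program: "BA English", term: "Spring 2025", level: "Exit" },
];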
Upload existing student work (.docx, .doc, .txt, .pdf) or collect it through the platform via submission links. Assign essays to cohorts during or after upload.
Outcome reports include an Achievement Summary (the percentage of students meeting each outcome), Score Distribution, Trend Analysis across cohorts, and a Criterion Breakdown. Use them to identify struggling outcomes, compare entry and exit cohorts, and track the impact of curriculum changes.
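The Achievement Summary is essentially the share of scored submissions at or above each outcome's benchmark. The sketch below shows that calculation; the input shape is an assumption.

```typescript
// Sketch of the Achievement Summary: % of scored submissions meeting each outcome's benchmark.
interface OutcomeScore {
  outcomeCode: string;   // e.g. "WC1"
  score: number;         // performance level, 1-4
  benchmark: number;     // minimum level for meeting the outcome
}

function achievementSummary(scores: OutcomeScore[]): Record<string, number> {
  const totals: Record<string, { met: number; all: number }> = {};
  for (const s of scores) {
    const t = (totals[s.outcomeCode] ??= { met: 0, all: 0 });
    t.all += 1;
    if (s.score >= s.benchmark) t.met += 1;
  }
  const summary: Record<string, number> = {};
  for (const [code, t] of Object.entries(totals)) {
    summary[code] = Math.round((t.met / t.all) * 100); // % meeting the outcome
  }
  return summary;
}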
Report templates: Standard Accreditation, Executive Summary, Detailed Analysis, or Custom. Report contents: an executive summary, outcomes, methodology, results with visualizations, benchmarks and trends, and optional anonymized sample essays. Export as PDF or Word.
Placement analytics: distribution across levels, completion rates, average scores by criterion, integrity flag rates, and faculty override rates. Program analytics: outcome achievement rates, longitudinal trends, and criterion analysis. Export data as CSV for SIS import or custom analysis.
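A minimal sketch of what a CSV export of placement results might look like; the column set is an assumption, and a real export should use a proper CSV library to handle quoting and escaping.

```typescript
// Illustrative CSV export for feeding placement results into an SIS or spreadsheet.
interface PlacementResult {
  studentId: string;
  recommendedLevel: string;
  finalLevel: string;
  confidence: number;
  overridden: boolean;
}

function toCsv(results: PlacementResult[]): string {
  const header = "student_id,recommended_level,final_level,confidence,overridden";
  const rows = results.map((r) =>
    [r.studentId, r.recommendedLevel, r.finalLevel, r.confidence, r.overridden].join(",")
  );
  return [header, ...rows].join("\n");
}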
User management: add users by email, assign roles, and send invitations. Users are deactivated rather than deleted to preserve audit trails. All three roles carry granular permissions.
Frequently asked questions cover: the difference between placement and program assessment, using both features together, AI analysis timing (typically 30-60 seconds), session handling, minimum sample sizes (30+ recommended), uploading existing essays, student privacy for accreditation, data security, and the LMS integration roadmap.
Common troubleshooting: access code issues, AI analysis delays, reviewer permissions, essay analysis requirements, and export completeness. Contact support@mywritingassessment.com for help.