Building a Data Management System and AI
Assistant for Quality Assurance at Tay Nguyen University
1. Context and Problem Statement
Tay Nguyen University (TNU) is a MOET-affiliated public university with a clear
governance model (Party Committee, University Council, Board of Rectors), 9
functional offices, 9 academic units, 2 training-support units, and 3
practice/teaching facilities. As of June 2025, TNU had 635 staff members (349
female and 286 male; 30 from ethnic minority groups), of whom more than 60% are
academic staff (1 Professor, 16 Associate Professors, 102 PhDs, 4 Specialist
Level II and 6 Specialist Level I physicians, 258 Masters, and 69 Bachelors).
TNU is striving to
enhance its educational quality and achieve rigorous accreditation targets for
2020–2030. The university has established a dedicated Quality Assurance Office
with 10 staff members to lead internal quality assurance (QA) efforts. TNU has
been actively pursuing both institutional and program-level self-assessments,
and in 2022 it became an Associate Member of the ASEAN University Network –
Quality Assurance (AUN-QA). By late 2024, two academic programs (Economics and
English) had already been accredited under AUN-QA standards, marking initial
successes in external quality verification. The strategic goal is that by 2030,
100% of TNU’s programs achieve domestic accreditation and at least 20% attain
international accreditation, aligning with national higher education quality
benchmarks. This push comes amid strong directives from the government and
university leadership to accelerate digital transformation and apply artificial
intelligence (AI) in governance and quality management. In March 2025, the
Prime Minister approved a plan for TNU (and a peer institution) to become
regional centers of excellence, which explicitly calls for “boosting digital
transformation and AI applications in teaching and university management”.
Despite this supportive policy
environment, TNU’s internal QA system faces significant challenges.
Quality-related data are currently siloed across multiple platforms and formats
– for example, some data reside in the national Higher Education Management
Information System (HEMIS) and the MOET’s SAHEP project system, while much
accreditation evidence is stored in disparate databases or even in paper
files. There is no centralized QA data repository to enable efficient access
and analysis. (Notably, the Ministry of Education and Training has deployed
HEMIS nationally to collect data from all universities, but at the
institutional level TNU lacks an integrated database for QA evidence, causing
fragmentation.) Self-assessment reports – required for accreditation – are
still prepared largely by hand. QA staff must gather evidence and statistics
from various sources and manually draft narratives for each criterion, which is
labor-intensive and time-consuming. This manual process often leads to
duplicated information across reports and may fail to fully leverage the
abundant evidence available. Although the university has an existing software
system for managing accreditation evidence and aiding report writing, it
currently has minimal AI capabilities. In practice, report writing teams
receive little intelligent support from the system beyond basic document
storage, and thus spend upwards of 3 months assembling a self-evaluation for
one program. This is inefficient and risks inconsistencies or omissions in the
final report.
Furthermore, stakeholder feedback analysis
is underdeveloped. TNU’s QA office conducts regular surveys to collect opinions
from students and other stakeholders on teaching quality, courses, facilities,
etc. (for instance, end-of-semester teaching evaluations, graduate satisfaction
surveys, etc., are administered each year). However, analysis of these feedback
data is done manually and mostly limited to basic statistics (e.g. average
ratings, simple charts). Such traditional analysis is often slow and superficial
– it may miss deeper insights hidden in open-ended comments or trends across
semesters. Studies note that manual feedback analysis is not only
time-consuming but can be subjective and fails to promptly yield detailed
actionable information from large volumes of responses. In TNU’s case,
open-text responses from students (which could contain sentiments and recurring
issues) are not systematically analyzed for sentiment or themes, meaning
potential quality problems or improvement opportunities might remain latent.
Most critically, the university lacks an
early warning system for student performance risks. There is currently no
data-driven mechanism to proactively identify and support students who are
struggling academically or at risk of dropping out. Interventions usually rely
on lagging indicators (like end-of-semester GPA or eventual academic
probation), by which time it may be too late to effectively retain the student.
Research shows that many universities’ warning and counseling efforts come “too
late” – typically after final grades or when a student has already initiated
withdrawal – and thus have limited effect on reducing attrition. Proactive
models using machine learning can predict student dropout risk much earlier by
analyzing various indicators (grades, attendance, library use, advising
sessions, etc.), enabling timely support and reducing the dropout rate. TNU
currently has no such predictive analytics in place; any support to at-risk
students is reactive. This gap is especially pressing as national standards now
consider a high dropout rate (e.g. >10%) as a serious quality issue for a
university.
In summary, TNU’s internal QA system is
not yet keeping pace with its growth and the increasing demands of
accreditation and digital governance. Data scattered in multiple places,
labor-intensive report writing, rudimentary feedback analysis, and lack of
early warning tools all hinder the university’s ability to assure and enhance
quality efficiently. These limitations persist even as TNU faces pressure to
innovate: the government and university leaders expect robust digital QA
systems and AI-driven management as part of the digital transformation agenda.
This contrast between external expectations and internal capability constitutes
the core problem. The urgent need is to integrate AI technologies into the QA
system to manage data more effectively, automate analysis and reporting, and
ultimately improve educational quality in a sustainable way.
Key challenges summary: TNU’s IQA system lags behind accreditation and digital
governance demands. Data dispersion, labor-intensive reporting, rudimentary
feedback analytics, and a lack of predictive support hinder QA effectiveness.
Meanwhile, policy expectations require robust, AI-enabled QA. The urgent need
is to integrate AI into QA to manage data, automate analysis and reporting, and
improve quality sustainably.
2. Project Objectives
Overall Goal: The project aims to
significantly improve the effectiveness and automation of TNU’s internal
quality assurance system through AI applications. By leveraging AI for data
integration, analysis, and report generation, the university will streamline QA
processes (data collection, self-assessment reporting, feedback analysis) and
better meet its accreditation and digital transformation targets. This
contributes to sustaining high educational quality and supporting student
success, in line with TNU’s strategic plan and accreditation commitments.
Specific Objectives and Key
Performance Indicators (KPIs):
• Centralized QA Data Management: Develop a unified,
central database that consolidates all key quality assurance data – including
accreditation evidences, survey results, student learning data from HEMIS,
outcomes (graduation rates, employment), etc. KPIs: By December 2025, have an
integrated QA data repository operational, containing at least 80% of the
important QA indicators and evidences needed for self-assessment. Success will
be measured by data coverage and reduction in time spent by staff to retrieve
information. Rationale: A unified database addresses the current fragmentation
– when all QA evidence is in one place, it becomes feasible to apply analytics
and AI.
• AI Assistant for Self-Assessment Reporting: Implement
an AI-powered module (using Natural Language Processing) to support the writing
of accreditation self-evaluation reports. This “QA AI Assistant” will suggest
content, summarize evidence, and draft sections aligned to accreditation
standards. KPIs: Reduce the time to prepare a self-assessment report by 30–50%
(≈3 months → ~1.5 months). By 2026, AI-generated drafts/suggestions used for
≥50% of criteria. Rationale: Automating retrieval and narrative drafts improves
coherence; recent QA forums report significant time savings. (A minimal
drafting sketch follows this list.)
• Enhanced Feedback Analysis and Early Warning: Utilize
AI to analyze stakeholder feedback (sentiment, themes) and pilot a predictive
model to flag at-risk students. KPIs: From 2026 (after April 2026, mainly
for Phase 2), produce ≥2 AI-driven feedback analysis reports/year;
achieve ≥70% correct identification of at-risk students in tests. Rationale:
NLP unlocks open-text insights; ML early-warning reduces attrition.
• Build Digital QA Capacity: Train QA
staff and relevant faculty to use the system and embed new AI-enabled QA
practices. KPIs: Train 100% of QA Department staff and ≥30 additional staff by
2026; develop ≥2 official AI-in-QA workflows (AI-assisted reporting; advisor
protocol for alerts).
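To make the intended behaviour of the QA AI Assistant concrete, below is a
minimal sketch of the evidence-summarization and drafting step, assuming a
Hugging Face summarization pipeline. The model choice, function names, and
evidence format are illustrative assumptions, not the committed design; the
production assistant would use a Vietnamese-capable model fine-tuned on TNU’s
past self-assessment reports.

```python
# Illustrative sketch only: a summarization-based drafting step for one
# accreditation criterion. Model, names, and data format are assumptions.
from transformers import pipeline

# Placeholder English model; a Vietnamese-capable checkpoint would be
# substituted and fine-tuned in practice.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def draft_criterion_section(criterion: str, evidence_texts: list[str]) -> str:
    """Summarize each piece of evidence linked to a criterion and assemble
    a first-draft narrative for the report-writing team to review."""
    summaries = []
    for text in evidence_texts:
        # Long documents would need chunking; truncation keeps the sketch simple.
        out = summarizer(text[:3000], max_length=120, min_length=30, do_sample=False)
        summaries.append(out[0]["summary_text"])
    body = " ".join(summaries)
    return (f"Criterion {criterion}:\n{body}\n"
            "[AI-generated draft - requires human verification]")

# Usage: draft a section from two hypothetical evidence excerpts.
print(draft_criterion_section("1.1", [
    "The programme's expected learning outcomes were revised in 2023 after "
    "consultation with employers and alumni, and approved by the faculty board.",
    "The 2024 stakeholder survey shows most employers rate graduate "
    "competencies as meeting or exceeding expectations.",
]))
```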
Change priorities:
(1) Curriculum relevance via data-driven QA (by Jan 2026).
(2) Industry partnerships via the Centre for Innovation and
Entrepreneurship/Advisory Committees (1–2 mechanisms by Dec 2025).
(3) Regional/international integration & benchmarking (beyond 2026).
3. Scope of Implementation
This project will be implemented university-wide, with an initial focus on key
pilot areas to ensure effectiveness before scaling up.
Pilot Units: one priority program preparing for AUN-QA accreditation
(AI-assisted report writing) and the QA Office for institutional
self-assessment and dashboards.
Data Coverage: integrate data from Training, Student Affairs, QA, faculties,
and HEMIS/ministerial systems to build a multidimensional dataset.
Stakeholders and Beneficiaries: QA staff and program teams (primary users),
leadership (dashboards for decisions), advisors/mentors (early alerts);
students benefit indirectly.
Scaling Plan: after pilot success, expand to all faculties/programs with a
modular architecture, phased onboarding, and continuous model/data updates.
4. Main Activities and Timeline (P-D-C-A)
Phase 1 – Plan and Do (November 2025 – April 2026)
– Detailed Needs Assessment and System Design (November 2025 – January 2026):
Map QA data sources; interview stakeholders; define requirements; assess build
vs. extend options; draft the database schema and AI assistant functions
(evidence summary, draft per criterion); plan the feedback analytics and
early-warning models.
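As an illustration of what the drafted schema might contain, here is a minimal
sketch of one core evidence table, prototyped in SQLite purely for
illustration; the table and column names are assumptions to be settled during
the needs assessment.

```python
# Illustrative sketch only: one possible core table for the unified QA
# repository, prototyped in SQLite. Table and column names are assumptions;
# the production schema will be defined during the needs assessment.
import sqlite3

conn = sqlite3.connect("qa_repository.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS evidence (
    id            INTEGER PRIMARY KEY,
    standard      TEXT NOT NULL,   -- accreditation framework, e.g. 'AUN-QA'
    criterion     TEXT NOT NULL,   -- criterion code, e.g. '1.1'
    title         TEXT NOT NULL,   -- human-readable evidence title
    source_unit   TEXT,            -- owning office or faculty
    source_system TEXT,            -- e.g. 'HEMIS', 'survey', 'manual upload'
    file_uri      TEXT,            -- pointer to the stored document
    period        TEXT,            -- academic year/semester covered
    added_at      TEXT DEFAULT CURRENT_TIMESTAMP
);
-- Fast lookup of all evidence attached to one criterion.
CREATE INDEX IF NOT EXISTS idx_evidence_criterion
    ON evidence (standard, criterion);
""")
conn.commit()
conn.close()
```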
– System Development and AI Module Training (January 2026 – March 2026):
Implement the unified QA database and hosting; migrate initial data; fine-tune
the NLP assistant on past reports/evidence; implement the sentiment pipeline;
prototype the risk model (e.g., logistic/tree-based) per prior research; design
the QA dashboard UI.
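As a concrete starting point for the risk-model prototype, here is a minimal
sketch using a scikit-learn logistic regression (one of the model families
named above). The feature and label column names are illustrative assumptions
drawn from the indicators discussed in Section 1; actual features depend on
what the unified QA database can supply.

```python
# Illustrative sketch only: a logistic-regression baseline for the
# early-warning prototype. Column names are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FEATURES = ["gpa", "attendance_rate", "library_visits", "advising_sessions"]

def train_risk_model(history: pd.DataFrame):
    """Fit a baseline dropout-risk classifier on a historical cohort.
    `history` must hold the FEATURES columns and a binary `dropped_out` label."""
    X_train, X_test, y_train, y_test = train_test_split(
        history[FEATURES], history["dropped_out"],
        test_size=0.2, stratify=history["dropped_out"], random_state=42,
    )
    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_train, y_train)
    # Recall on the at-risk class is the figure to compare with the >=70% KPI.
    print(classification_report(y_test, model.predict(X_test)))
    return model

def flag_at_risk(model, cohort: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Score the current cohort and list students above the alert threshold
    so advisors can follow up before end-of-semester grades arrive."""
    probs = model.predict_proba(cohort[FEATURES])[:, 1]
    flagged = cohort.assign(risk_score=probs)
    return flagged[flagged["risk_score"] >= threshold].sort_values(
        "risk_score", ascending=False)
```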
– Pilot Deployment and Refinement (March – April 2026): Use the system in a
real self-assessment (run one live SAR with AI support); run AI-based survey
analysis; test early warning on recent cohorts; gather metrics against KPIs;
iterate on models/UI; train power users.
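For the AI-based survey analysis step above, a minimal sentiment-tallying
sketch follows, assuming a pretrained multilingual classifier from Hugging
Face. The model choice is a placeholder; the production pipeline would add a
Vietnamese-tuned classifier and theme clustering of comments.

```python
# Illustrative sketch only: tally sentiment over open-text survey comments.
# The model is a multilingual placeholder, not a committed choice.
from collections import Counter
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",  # placeholder
)

def summarize_feedback(comments: list[str]) -> Counter:
    """Label each comment (1-5 stars) and count labels for the QA dashboard."""
    results = classifier(comments, truncation=True)
    return Counter(r["label"] for r in results)

# Usage with two hypothetical student comments (Vietnamese).
print(summarize_feedback([
    "Giảng viên nhiệt tình, bài giảng dễ hiểu.",   # "enthusiastic lecturer"
    "Phòng học quá nóng và thiếu thiết bị.",       # "room too hot, lacks equipment"
]))
```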
Phase 2 – Check and Act (May –
December 2026 and beyond)
– Evaluation of Pilot Outcomes (May – September 2026): Measure KPIs (data
coverage, time reduction, AI usage, prediction accuracy, user satisfaction);
external QA review of report quality; document lessons and adjustments.
– Scaling Up and Institutionalization (Post-2026 – Act): Phased rollout to all
programs; policies mandating system use; assign QA/IT ownership; role-based
access, privacy, and backups; continuous model retraining and tech upgrades.
Monitoring and completion:
· KPI set: increase the number of surveys and the associated response rates;
conduct at least one focus group discussion or seminar for each program;
workshops and participation up 20%; ≥1 advisory committee per faculty; 40
programs revised; MoUs from 50 to 100 within 3 years; international students
from 30 to 90–100 within 3 years; student mobility of 10–20 per academic year;
employability from 80% to 90%; national ranking from 73 to 60 within 3 years;
enrolment up 10% (from 2,200 to ≥2,500 within 3 years); improved entry quality
in ≥10 majors.
· Completion: KPIs met or nearly met; stakeholder satisfaction evidenced;
internal and external benchmarking confirms progress.
5. Expected Results/Outcomes
1. Integrated QA Data and Analytics Platform: A centralized database with
real-time dashboards (graduation rates, employment, student satisfaction,
etc.), improving transparency, reducing duplication, and enabling leadership to
access a “QA health snapshot” anytime.
2. AI Assistant for Self-Assessment Reports: An NLP-powered module to draft
accreditation report sections, link evidence, and suggest
strengths/weaknesses. Expected to cut report preparation time by ~50% and
improve completeness and consistency, leading to stronger accreditation
outcomes.
3. AI-Enhanced Feedback Analysis and Early Warning: At least two AI-generated
feedback analysis reports per year, revealing sentiment and themes from student
comments. A predictive model will flag ≥70% of at-risk students, enabling
proactive advising and reduced dropout rates.
4. Improved QA Efficiency and Culture: QA processes become faster and more
data-driven; staff shift from clerical tasks to improvement actions. Dashboards
and alerts reinforce continuous improvement and shift mindsets from compliance
to development.
5. Strengthened Digital and AI Skills: ≥40 QA staff and faculty trained in
AI-enabled QA practices. New workflows institutionalize AI use, leaving a
sustainable legacy of a tech-savvy QA workforce.
6. Risk Management
Implementing an AI-based QA system is
ambitious, with several risks and barriers. Mitigation strategies are as
follows:
1. Technical expertise and data quality:
TNU has limited AI/data expertise and fragmented QA data. Mitigation: Engage
external experts and mentors, train QA staff through workshops, and adopt a
phased integration—prioritizing key data first, while cleaning and
standardizing over time.
2. Financial and infrastructure
constraints: Developing AI tools requires funding and computing resources.
Mitigation: Maximize existing infrastructure, use open-source frameworks and
affordable cloud services, and seek additional budget support or external
resources (e.g. InnoAIQA toolkit). Start with scalable, low-cost models and
expand gradually.
3. Human/Staff resistance to change: Staff
may be skeptical of AI or reluctant to change workflows. Mitigation: Ensure
leadership mandate, demonstrate quick wins (e.g. AI-generated drafts saving
time), use early adopters as role models, provide hands-on training, and emphasize
AI as a support tool. Build trust by validating AI outputs during pilots.
4. Sustainability and maintenance: Risk of
system disuse without clear ownership; security and privacy also critical.
Mitigation: Assign QA office as system owner, IT Center for technical support.
Establish maintenance schedules, train multiple staff, integrate use of the
system into official QA procedures, and enforce strict data security
(role-based access, anonymization, backups). Continuous user feedback and
updates will keep the system relevant.
7. Support Requested from the InnoAIQA Program
Technical consultation and AI expertise. Assign experts in Vietnamese NLP and
education analytics to advise on our
report-writing assistant and student-risk model (e.g., scoping, data
preparation, algorithm/feature selection, evaluation). Provide periodic
clinics/code reviews, reference case studies, and any InnoAIQA
toolkits/libraries.
Mentoring and peer learning. Pair us with a project mentor experienced in HE
QA/data systems for
milestone reviews and risk management. Facilitate peer exchanges (virtual or
onsite) with institutions running digital QA dashboards, AI-enabled feedback
analysis, or early-warning systems.
Tools, templates, and resources. Share ready-to-use dashboard and survey
templates; data-governance
guides; sample QA data schemas; sentiment/NLP starter code. Where possible,
offer time-limited credits or licenses to evaluate cloud AI services (e.g.,
Azure/Google NLP) to reduce prototyping costs.
Networking and future collaboration. Connect us with international and
national QA networks for
benchmarking and good practices. Help disseminate project outputs
(workshops/conferences), and provide endorsements that strengthen bids for
follow-on funding and partnerships.
