Exegesis of AI Policies
Exegesis (noun): A detailed explanation or interpretation of a text, especially religious, philosophical, or scholarly writings.
Before we start, I have to share with you that the title of this article came to me quite naturally. It’s supposed to explain in detail my interpretation of official documents; it is a literal exegesis. After a while, however, I did suspect people may not know what that word meant. So I asked 3 of my teacher colleagues point blank, and they did not know… Unfazed, I asked my knowledgeable wife, and she didn’t know either. Instead of discouraging me from using it in my title, it reinforced my desire to share it with you, because I learned about it in one of my favorite books. I doubt anybody will go and actually read it, but just in case you want to learn something deep about me, here it is:

I’ve been dreading this moment for a while but it’s time I tackle AI policies and guidelines: the smelly skeleton in every district’s closet, the 800-pound boring gorilla, the mind-numbing mammoth in the room… Sure, I’ve broached the topic in the past, but with school districts slowly waking up to the fact that they need clear rules around AI use, especially now that the first lawsuits are landing, I guess the moment is now. And since district administrators probably don't have a bunch of free time to wade through the tsunami of literature being dumped on them by the federal government, state education departments, private companies, even UNESCO among others, I figure I might as well share what I've learned from my long glassy-eyed afternoons fighting the urge to do - literally - anything else.
Fair warning: I'm going to try my best to make this digestible (with a little help from my AI friends), but we need to cover a lot of ground here. To that end I’ll need to depart from a narrative style to employ a rather didactic approach. School and district leaders need to understand that successfully implementing AI usage guidelines isn't just a matter of copying and pasting some policy document - it's a long and complicated project that requires actual guidance. So buckle up, because we're starting with the US Department of Education.
You may have heard of that 71-page document they released in May of 2023, titled “Artificial Intelligence and the Future of Teaching and Learning.” Save yourself the trouble - I read it so you don't have to. Whether you've been living under a rock for the past 2 years or you're actually keeping up with AI developments, these 71 pages will make you reassess your life choices. Let me give you the TLDR before we get to the juicy stuff.
Analysis - The U.S. Department of Education Guidelines
Note: for the rest of the article, if you see the block quote blue bar like just below, it means I’m either directly quoting (“x”) or paraphrasing from the documents I’m talking about.
“Some AI-enabled systems and tools seek to address this potential conflict…”
“AI-enhanced formative assessments may have the potential to save teachers’ time”
“Innovators will be building models that fit available data”
“We argue that AI can bring opportunities to address a wider spectrum of strengths and needs but only if developers and innovators focus on the long tail and not only ‘teaching to the middle’.”
Once you sift through all the speculative assertions in this document like above, you’ll probably be hunting for actual actionable recommendations. You know, something teachers and administrators can actually use and/or exert an influence over? Well, all you’ll find instead is something far too rarely produced by the government, yet hardly helpful here: common sense.
1. Keep Humans in the Loop: teachers should use AI systems as tools to enhance their instruction while maintaining authority over decisions. Educators can select AI systems that allow manual overrides and human discretion.
Example: Prioritize adopting AI tools designed to assist, not replace, instructional roles
My two cents: Thanks, Captain Obvious!
2. Design AI for Equity and Inclusion: Educators can advocate for and select AI tools that are explicitly designed to serve diverse learners, including students with disabilities and English language learners.
Actionable Step: During procurement, prioritize tools that offer adaptive features and culturally responsive content
My two cents: for the culturally responsive part, it’s unclear what they’re talking about, since GenAI is a content generator; could they have looped in prior adaptive AI tools like IXL, iReady, and such? As far as accessibility is concerned, at this point and for the foreseeable future, all GenAI tools will be accessed through a web browser, so any assistive technology that was used prior still stands. Ultimately, all the tools will include text-to-speech and speech-to-text, and vision more broadly; the first products are already available to consumers (more on that later).
3. Ensure Data Privacy and Ethical Use: District administrators must evaluate AI tools for compliance with data privacy laws like FERPA and state-specific regulations. Teachers should avoid inputting sensitive student data into systems without clear privacy assurances.
Actionable Step: Establish clear policies for how student data is used and ensure AI tools meet district privacy standards
My two cents: this is EdLaw 2D here in NY; straightforward, nothing new either.
4. Focus on Transparency and Explainability: Teachers and administrators can demand that AI vendors provide clear documentation of how their systems function and offer training on interpreting AI outputs.
Actionable Step: Use tools that provide transparency in their algorithms and outputs to ensure they align with instructional goals
My two cents: this is impossible, it simply does not exist. AI labs themselves don’t really understand how LLMs do what they do. Please, go ahead and demand this from OpenAI, Microsoft, or Google, and I will amend this article if you get anything in return.
5. Promote AI Literacy: Teachers can incorporate lessons about AI into their curriculum, helping students understand how AI works and its ethical implications. Administrators can provide professional development for staff on using and understanding AI.
Actionable Step: Include AI literacy as a component of professional learning sessions and student workshops
My two cents: this is a great example of what drives me nuts in this kind of document. It’s like saying “all we need is to come up with a plan.” Who’s defining “AI literacy”? The term “AI literacy” appears 10 times in the document, 6 of which are on page 49, yet nowhere does it say what they mean by it. If we have to teach it, a definition would be a welcome first step, don’t you think? I guess I’m guilty of the same sin of using such terms without defining them. The difference, of course, is that I don’t pontificate about it in a 71-page document I impose on every educational institution in the country. But hey, if there’s a vacuum there maybe I can squeeze in my own definition? That’s a good topic for another article…
6. Strengthen Trust in AI: Districts can build trust by selecting reliable AI tools and fostering a culture of collaboration between educators and developers. Teachers should share feedback about AI tools with administrators to refine their usage.
Actionable Step: Pilot AI tools in small settings and adjust based on teacher and student feedback before full implementation
My two cents: Yes! I mean, I don’t know about collaboration between classroom teachers in the Hudson Valley and AI engineers in the Silicon one, but the sharing of information between stakeholders during experimentation and evaluation is paramount. I am definitely in favor of gaining experience through a smaller pilot cohort.
“Now is the time to show the respect and value we hold for educators by informing and involving them in every step of the process of designing, developing, testing, improving, adopting, and managing AI-enabled edtech”.
I did like this quote from page 58, yet I wonder… If you are an educator, let me know in the comments: does this quote accurately reflect your involvement in the AI revolution?
Before we move on, there’s one thing worth noting: THE ONLY PART of this 71-page opus that seems to resonate with people—myself included during workshops and talks—is a single picture and quote about a laudable goal. I’m serious, every time I attend a webinar, workshop, or conference where this document comes up, that’s the one thing everyone highlights (though I suspect many haven’t actually read the full document). As for how to achieve this goal? Don’t ask the U.S. Department of Education—that part didn’t make it into the report. Still, it’s a compelling vision for the potential role of AI in education.
Synthesis - US State Education Departments Guidelines
At the time of writing, 23 states have decided to throw their hats into the AI guidelines ring. New York, hellooo??! We're talking everything from Hawaii's breezy 2-pager to Ohio's War-and-Peace-sized 81 pages, averaging around 23 pages per state. They're about as diverse as you'd expect - different tones, styles, and target audiences. Some are very good, some utterly useless. Washington State actually had the revolutionary idea to speak directly to students (imagine that!), and North Carolina's guidelines, still my favorite, were definitely written by someone who deeply understands the issue at hand.
I'll be honest - while I've perused them all many times before, I didn't exactly jump at the chance to re-read all 22 of them for this article. That’s 462 pages total. Instead, I took advantage of the brand-new Gemini 2.0 Flash released a couple of weeks ago (free in AI Studio, by the way) and uploaded them all to ask questions. I’m including the full answer HERE along with my prompt, but here’s the overall assessment:
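For districts that want to repeat this exercise as new state guidelines keep appearing, the same "upload everything and interrogate it" workflow can also be scripted against the Gemini API rather than clicked through in the AI Studio web UI. This is a minimal sketch under stated assumptions, not the exact setup I used: the folder name, file names, and prompt wording are all hypothetical, and it assumes the google-generativeai Python package is installed and an API key is set in the environment.

```python
# Sketch: batch-uploading state AI-guideline PDFs to Gemini for cross-document Q&A.
# Assumptions: google-generativeai installed, GOOGLE_API_KEY set, PDFs in a local
# folder. All names below are hypothetical placeholders.
import os

PDF_DIR = "state_guidelines"  # hypothetical folder of downloaded guideline PDFs

def build_prompt(question: str, filenames: list[str]) -> str:
    """Compose one prompt that lists the attached documents by name,
    so the model's answers can be attributed to specific states."""
    listing = "\n".join(f"- {name}" for name in sorted(filenames))
    return (
        "You are given the following state AI guidance documents:\n"
        f"{listing}\n\n"
        f"Question: {question}\n"
        "Cite the state whose document supports each claim."
    )

def ask_gemini(question: str) -> str:
    """Upload every PDF once, then ask a single cross-document question."""
    import google.generativeai as genai  # deferred so the sketch stays self-contained
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    files = [
        genai.upload_file(os.path.join(PDF_DIR, name))
        for name in os.listdir(PDF_DIR)
        if name.endswith(".pdf")
    ]
    model = genai.GenerativeModel("gemini-2.0-flash")
    prompt = build_prompt(question, [f.display_name for f in files])
    return model.generate_content([prompt, *files]).text
```

The point of the `build_prompt` helper is simply to force the model to attribute claims to a named document, which makes spot-checking its answers against the original PDFs much easier.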

Some of these documents can actually be pretty helpful in understanding what needs to be addressed - that is, if your district is tired of waiting for New York State to issue its own guidelines. That's exactly what Denver School District did; they didn’t bother to wait for Colorado. Meanwhile, NYC Public Schools, the largest district in the country, is still playing the waiting game.
This brings us to the challenge of compiling a document that is actually helpful to the main stakeholders: teachers and students. To do so, many decisions need to be made. If districts want buy-in from educators, students and families, these stakeholders need to be 1. educated about the opportunities and challenges presented by this new technology, and 2. given a voice to make sure they participate in shaping the district’s, and therefore the community’s, philosophy about AI and its role in education.
That's why I think we're better off starting with Policy 8636, issued last year by the New York State School Boards Association. It's a refreshingly brief 3-pager that manages to capture all that “common sense” I was griping about earlier, but in a much more digestible format. Let's do a close reading session, shall we? Feel free to read it yourself, but I'll reproduce most of the text here, adding my comments along the way, highlighting what actually needs to happen once a district adopts this document, and the real work begins.
Exegesis - NYSSBA Sample Policy 8636
“ ARTIFICIAL INTELLIGENCE
NOTE: The use of generative artificial intelligence has increased among the general public, with the release of ChatGPT and other online tools that generate text, images, and videos in response to a user’s prompt. With this new development came opportunities and challenges for school districts. This is an evolving area, and one that will need to be periodically revisited. This sample policy is based on our best recommendations for how to handle in-school and at-home use of generative AI and the impact on school operations. It is important to discuss these issues with your school attorney, Director of Technology, Data Protection Officer, as well as with staff, students, and families. “
This opening note is useful in that it acknowledges that neither the board nor anyone else really knows how to address these “opportunities and challenges”, which is why most top-down guidelines are so hopelessly unspecific. Anything specific will be experimental, and stakeholders need to be aware of this fact. They appropriately recommend including all said stakeholders in the conversation; however, they do not detail any specific way this should happen. All parents, or a few parents? Once? Twice? Or in an ongoing process? During school board meetings, or with an asynchronous way to participate in the conversation to foster diversity, equity, and inclusion? This is an example of the “many decisions” that will need to be made.
“The use of artificial intelligence (AI) has permeated aspects of everyday life, including school district operations, such as email spam filters, navigation apps, search engines, speech recorders, spelling and grammar checkers, and word processing auto-complete suggestions, often embedded into commonly used software. Generative artificial intelligence is a type of AI technology that can quickly generate large amounts of high-quality, convincingly authentic, human-like content, such as language, computer code, data analysis, images, video, and audio, in response to a prompt, based on data that it was trained on.”
You’ve all seen chatbots, but just so we’re clear on the state of AI as of December 2024, here’s what it can do for free now:
“The widespread availability and use of generative artificial intelligence (GenAI) presents both challenges and opportunities for the district. Care must be taken to address and mitigate the challenges, and maximize the opportunities, to improve student learning and district operations.”
“Care must be taken” suggests that if your district were to adopt this document, you would then have a mandate to “address and mitigate the challenges” that are not defined in this document, to “maximize the opportunities” that aren’t defined either, and to “improve student learning” through your integration of GenAI. How? This is the million-dollar question that can only be answered through a collective effort of ideation, experimentation, data analysis, and sharing with other communities on what works and what doesn’t. This process needs to be designed, tested, and refined over time while keeping up with the constant changes in the technology. Ulster BOCES will be launching an initiative soon, aiming to kickstart collaboration between teachers across the county; more on that in the conclusion.
“ Acknowledgements
The district acknowledges that many students are able to access GenAI outside of school, and may be able to use GenAI to complete school assignments. However, not all students are able or willing to do so, and should not be penalized for not using GenAI.”
Yes! I sincerely commend the NYSSBA for including this sentence. They're actually acknowledging the digital divide elephant in the room. Think about it: some kids will have premium subscriptions to cutting-edge AI tools, others will have tech-savvy parents showing them the latest Gemini 2.0 releases and such, and then there are students who might as well be living in 2022 for all they know about AI. Oh, and let's not forget those who still can't get decent internet at home. So how exactly do we “not penalize” these students? Here's a wild thought - maybe we should actually provide AI access and education to level the playing field. But that opens another can of worms: which tools do we provide? How do we train staff and students? How do we keep everyone's AI literacy up to date with a technology that moves at warp speed? And the question every administrator loves: what's the budget for all this?
“The district further acknowledges that the tools to detect the use of GenAI accurately, consistently and fairly may not be available, may quickly become obsolete, or may be biased against English Language Learners.”
Kudos to NYSSBA! Many state guidelines don't even touch this hot potato. I've beaten this dead horse before, but it bears repeating - everyone needs to be on the same page about this. And my page clearly says “don't use these tools at all.”
“The district also acknowledges that the data used to train GenAI models is not usually made public, may be biased, and may violate copyright laws. The responses generated by GenAI may be biased, wrong, or violate copyright laws.”
Add this to your AI literacy 101 curriculum, folks. But again - districts need to figure out how to actually teach this.
The document then goes on to mention that this policy complements existing policies on acceptable computer use and academic honesty. But the real meat and potatoes are the four guidelines we're about to dig into. First, though, let's wrap up the housekeeping stuff.
The rest is mostly EdLaw 2D compliance mumbo-jumbo, but here's the thing - if you've already agreed that we need to provide AI access to all students (and you should), this becomes pretty straightforward. SchoolAI, MagicSchool, and Chat for Schools have all signed the EdLaw 2D rider, and more will follow. So once you pick your poison, the compliance part is basically done. The document ends with some intellectual property considerations that are about as exciting as watching paint dry, so let's move on to the good stuff.
The Final Exegesis: NYSSBA's Four Commandments
“1. The Board supports including the principles of responsible and effective use of GenAI as it relates to the curriculum as well as life outside of or beyond school.”
Translation: “We know AI is spreading faster than germs in daycare, so let's try to be ‘responsible and effective’ - though we won't tell you what we think that means.”
“2. Students are responsible for their own work, and any errors it may contain, and must cite the sources they use as required by the classroom teacher.”
What else is new? Whether or not AI can be used as a “source” is also unclear here; this will be another decision made by [fill in the blank].
“3. The Board respects the professional capacity of the instructional staff to assign work that is less susceptible to student use of GenAI to circumvent learning, and allow for multiple methods for students to demonstrate competence and understanding.”
Oh, this one's my absolute favorite! It's basically “teachers will figure it out” in fancy policy-speak. Does it remind anyone else of 2020? You know, when we were told overnight to teach “remotely” while nobody actually knew what that meant? We really had to build the plane while flying it, but this time it will be a rocket, please & thanks! It eerily echoes the quote from the US Department of Education I mentioned earlier: “now is the time to show respect to the educators”... who will do all the work.
And let's talk about this magical “work that is less susceptible to student use of GenAI.” That frontier is being pushed back constantly! Nearly everything we've traditionally asked students to do can now be outsourced to AI. I can literally create an avatar of myself speaking any language in my own voice and likeness, saying whatever I want. Sora's out there turning text into videos, Genie 2 can turn a jpeg into a playable 3D world for Pete’s sake! The “multiple methods” bit is the only lifeline here, hinting at a fundamental shift in how we need to think about assessment.
“4. Instructional staff must be clear about their expectations for student use of GenAI in assignments. Staff who suspect a student has not done an assignment on their own can request that the student demonstrate their knowledge of the material in other ways, to the same extent they already do.”
Sure, teachers need to communicate clear expectations - but shouldn't the state/district/school provide some kind of framework about when and how students can use AI? Personally, I'm a fan of the AI stoplight framework, but there are others worth exploring.
That last sentence is actually pretty smart - it addresses what to do when you suspect AI shenanigans, which surveys show most teachers haven't been trained for. The Board seems to prefer a non-confrontational approach (smart move), suggesting teachers just ask students to show their knowledge in a different way. I wrote about this in my AI detection software article, but let me take it a step further: if I were running a district, I'd start changing how work is submitted right now, starting with K-6. Since written work is about as trustworthy as an inflatable dartboard unless it's done in front of us, we need to get students used to discussing their work through other means - sometimes, or all the time; that's another decision to make. Sure, 2024's teenagers might grumble about this change, or even flip a finger at you, but if we start with the young’uns, by the time they hit secondary school, it'll just be business as usual. Of course, this means getting elementary teachers on board, even though this whole AI circus arguably affects them less than their secondary colleagues.
Conclusion
We’re almost there folks! Don’t leave now, I’m about to lay it all out.
The implementation of AI policies and guidelines in education isn't just another administrative task - it's a fundamental shift in how we approach teaching and learning. For a district to successfully navigate this transition, four essential questions must be addressed:
How will your district create meaningful engagement between educators, administrators, students, and families to develop AI guidelines that reflect your community's values and needs?
How will you build shared understanding among all stakeholders about AI's concrete capabilities, limitations, and implications for teaching and learning?
How will your district translate abstract guidance into specific, actionable policies that define acceptable and innovative uses of AI?
What systems and supports will you put in place to bridge policy implementation with day-to-day classroom realities?
None of these questions has a simple answer, and no district will get it perfectly right on the first try. What matters is starting the process with intention and humility, acknowledging that this is uncharted territory for everyone involved. That's why my team and I at Ulster BOCES are launching two complementary initiatives. The first provides districts with structured guidance through questions 1-3, ensuring they develop robust, well-considered policies. The second, which we call the AI Circle, creates a collaborative space where educators can experiment with AI integration and collectively define what effective implementation looks like in real classrooms. Together, we'll explore AI tools, test different approaches, and document what works (and what doesn't). Through this shared learning experience, we'll build practical knowledge about AI's role in education from the ground up. If you're ready to help shape how AI transforms education in our region, reach out.
The future of education isn't being written in policy documents or tech company boardrooms - it's being created in classrooms by communities making thoughtful choices about AI integration. The question isn't whether to embrace AI - that ship has sailed. The question is whether we'll approach this transformation with the careful analysis and intentional design it deserves.
Update 1/6/25: I just found out Louisiana issued its own guidelines last fall; the link is not included on the AiForEducation webpage.