Beautiful, Elegant, and Completely Wrong: Your AI Can Code But It Can't Think — A Survival Guide from Automators Anonymous

Summary

Your AI assistant can generate a working prototype in minutes — but trusting it without testing is a recipe for disaster. These 10 practical tips for writing code with generative AI come from Automators Anonymous at the University of Michigan, where members build clinical, research, and operational tools using Power Automate, Power BI, Power Apps, JavaScript, SQL, SharePoint, and AI. From prompt engineering to data validation, this is the survival guide we wish we had when we started.

Body

Author(s): Gabriel Mongefranco; Automators Anonymous
DOI: https://doi.org/10.7302/28866


Introduction

AI-generated code can be beautiful. Elegant, even. Clean variable names, thoughtful structure, helpful comments. It can also be completely, confidently wrong — and it will never tell you so.

Generative AI tools can dramatically accelerate software development, but they cannot think. They do not understand your business rules, your data quirks, or the nurse who will click that one button you never considered. They pattern-match at superhuman speed and produce code that looks like it was written by someone who understood the problem. That is exactly what makes AI-generated code dangerous when left unexamined: it looks right, even if it's wrong.

[Screenshot: DataLaVista, an open source dashboard designer built for ease of use and constrained environments.]

This survival guide provides practical, field-tested tips for writing code with AI assistants — whether you are a seasoned developer or new to coding. These tips emerged from developing cutting-edge solutions to real clinical, research, and business problems at Michigan Medicine and the Eisenberg Family Depression Center, with contributions from members of the Automators Anonymous community of practice.

Many of these lessons were learned firsthand while building tools like DataLaVista, an open source dashboard designer and embeddable viewer now available for beta testing, as well as a QGenda integration featuring JavaScript-based front-ends, dashboards, and advanced retirement and scheduling modeling. Both DataLaVista and the QGenda tools run inside single-file scripts embedded into SharePoint pages — projects that pushed the limits of what AI-assisted development can accomplish in constrained environments.

 

TL;DR: Iterate, be specific in your prompts, describe usage scenarios in detail, always validate AI output, and leverage the right tools for the job.

 

Background

Automators Anonymous

Automators Anonymous is a community of practice made up of members from different areas across campus who meet regularly to learn from each other and innovate on ambitious integration and automation projects. Members work with Power Platform (Power Automate, Power BI, Power Apps, Power Query), Tableau, JavaScript, SQL, SharePoint, and AI technologies to solve real-world problems in clinical operations, research, and administration. The tips in this article reflect the collective experience of this group tackling complex, cross-functional challenges.

DataLaVista

DataLaVista is an open source dashboard designer and embeddable viewer developed by Gabriel Mongefranco at the Eisenberg Family Depression Center, and Jeremy Gluskin and Shelley Boa at Michigan Medicine. It runs as a single-file script embedded in a SharePoint page, allowing teams to build and share interactive dashboards without standing up dedicated infrastructure. Building DataLaVista with AI assistance surfaced many of the lessons documented here — particularly around modularity, iteration, and the importance of validating AI-generated front-end code.

QGenda Integration and Clinical Workforce Modeling

A JavaScript-based integration with QGenda, an enterprise clinical scheduling software, was developed by Shelley Boa to provide clinical operations teams with dashboards and advanced modeling for retirement forecasting and schedule optimization. Like DataLaVista, the entire solution runs inside a single-file script on a SharePoint page. The complexity of parsing scheduling data, modeling workforce scenarios, and rendering interactive visualizations — all within the constraints of a SharePoint-embedded script — provided invaluable lessons in how to effectively partner with AI on non-trivial development work.

 

The Top 10 Tips for Coding with AI

1. Always Get a Second Opinion

Just like your doctor's diagnosis or your home appraisal, always get a second opinion. Ask multiple AI models to solve the same problem, then compare their responses. Combine the best solutions from each, and feed the merged result back for criticism and completeness. This cross-validation approach helps catch blind spots that any single model might have. Remember — AI can code, but it cannot think critically about its own output. You need multiple perspectives to compensate.

Example: You are building a Power Automate flow that routes IRB amendment notifications to the correct study coordinators based on protocol number and site. You ask one AI to generate the flow logic and a second AI to review it. The first AI correctly parses the protocol number but assumes a single coordinator per study. The second AI catches this and suggests a lookup table to handle multi-site studies with multiple coordinators. Combining both approaches gives you a more robust solution.
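The lookup-table fix the second AI suggested can be sketched in a few lines. The protocol numbers, site names, and addresses below are invented for illustration; in a real Power Automate flow, this table would more likely live in a SharePoint list or Dataverse table than in code.

```javascript
// Hypothetical sketch: routing IRB amendment notifications using a lookup
// table that supports multiple coordinators per protocol and site.
// All protocol numbers, sites, and addresses are invented examples.
const coordinatorLookup = {
  "HUM00012345": {
    "Ann Arbor": ["coordA@example.edu"],
    "Flint": ["coordB@example.edu", "coordC@example.edu"],
  },
};

function routeAmendment(protocolNumber, site) {
  const byProtocol = coordinatorLookup[protocolNumber];
  if (!byProtocol) return [];      // unknown protocol: route nowhere, flag for human review
  return byProtocol[site] || [];   // unknown site: empty list rather than a wrong guess
}
```

Returning an empty list for unknown protocols or sites, instead of a default recipient, makes routing failures visible during testing rather than silently misdelivered.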


2. Do Use AI to Build Prototypes

AI cannot think, but it can build — fast. It excels at generating working prototypes quickly. Use this to your advantage — dream big and push the boundaries from the start. Get a functional draft in front of stakeholders early and iterate from there, rather than spending days on boilerplate code. The prototype will be beautiful. It will also need work. That is the point.

Example: A clinical operations team needs a dashboard showing provider scheduling gaps and projected retirement dates for workforce planning. Rather than spending weeks mocking up designs, you use AI to generate a working JavaScript prototype with sample data in a single afternoon. You embed it in a SharePoint page, share the link with the clinical operations director, and get feedback the next day — weeks ahead of a traditional development timeline.


3. Always Ask for Small Changes

Write your software, data pipeline, or query in a modular fashion. Instead of asking AI to build an entire application in one shot, break your request into small, well-defined tasks. This makes it easier to test individual components, isolate bugs, and iterate on specific pieces without disrupting the whole project. The bigger the ask, the more places the AI can go beautifully, absolutely wrong.

Example: You are building a data pipeline that pulls wearable device data from the Garmin API, cleans it, calculates daily sleep and activity summaries, combines it with demographics from REDCap, and loads the results into a reporting table for a behavioral research study. Instead of asking AI to generate the entire pipeline at once, you start with just the Garmin API call, then the data cleaning step, then the REDCap API call, then the summary calculations, and finally the database load. When the sleep calculation comes back wrong, you only need to fix that one module — not untangle an entire monolithic script.
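Two of the smaller modules from the pipeline above might be sketched like this. The record shape (a `timestamp` string and a `sleepMinutes` count) is an assumption for illustration, not the actual Garmin API format, but it shows how each step stays independently testable.

```javascript
// Hypothetical sketch of two modular pipeline steps. Field names are
// invented; the real Garmin API payload will differ.
function cleanGarminRecords(records) {
  // Drop rows with missing timestamps or negative sleep durations.
  return records.filter(r => r.timestamp && r.sleepMinutes >= 0);
}

function summarizeDailySleep(records) {
  // Sum sleep minutes per calendar day (ISO date prefix of the timestamp).
  const totals = {};
  for (const r of records) {
    const day = r.timestamp.slice(0, 10);
    totals[day] = (totals[day] || 0) + r.sleepMinutes;
  }
  return totals;
}
```

Because each step takes plain data in and returns plain data out, a wrong sleep total can be reproduced and fixed in `summarizeDailySleep` alone, without touching the API calls around it.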


4. Always Code in Plain English (or Spanish, or Any Natural Language)

Generative AI tools are, at their core, large language models — what they do best is process language. If you can write out exactly what your tool, app, or code should do, how users would interact with it, and any edge cases to handle, all in plain English (or Spanish, or your preferred language), the AI will simply need to translate your prompt into code. This approach consistently yields better results than vague or overly technical prompts. You are doing the thinking; the AI is doing the translating. That is the right division of labor. This also aligns with the advice we received from the U-M Innovation Partnerships office: you are the author of any and all code written, even with AI tools.

Example: Instead of asking AI to "write a JavaScript filter function," you write: "The user sees a list of all active clinical trials in a table. Above the table is a search box. When the user types in the search box, the table should filter in real time to show only rows where the study title, PI name, or protocol number contains the search text. If the search box is empty, show all rows. The filtering should be case-insensitive and should not require the user to press Enter." This plain-language description gives the AI everything it needs to produce accurate, usable code on the first try.
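That plain-language spec translates almost mechanically into code. Here is a sketch of what the resulting function might look like, assuming hypothetical row fields named `title`, `pi`, and `protocol`:

```javascript
// Sketch of the filter behavior described in the plain-language prompt.
// Field names (title, pi, protocol) are assumptions for illustration.
function filterTrials(trials, searchText) {
  const query = searchText.trim().toLowerCase();
  if (!query) return trials;  // empty search box: show all rows
  return trials.filter(t =>
    [t.title, t.pi, t.protocol].some(field =>
      (field || "").toLowerCase().includes(query)  // case-insensitive, no Enter required
    )
  );
}
```

Notice that every requirement in the prose spec (empty-box behavior, case-insensitivity, which columns to search) maps directly to a line of code, which makes the AI's output easy to verify against the prompt.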


5. Always Think Through User Interaction

Before prompting the AI, think carefully about how end users will interact with your tool. Consider the full user journey: inputs, outputs, error states, and edge cases. The AI will not anticipate the user who clicks "Submit" with an empty form, or the admin who opens the tool on a Monday morning before the weekend data load has finished. The more clearly you can describe the user experience, the more useful the generated code will be. This is where human thinking is irreplaceable.

Example: You are building a self-service tool for department administrators to check the status of HR onboarding tasks for new hires. Before you start prompting, you think through the experience: the admin logs in, sees a list of their pending new hires, clicks on a name, and sees a checklist of completed and outstanding tasks. What happens if there are no pending new hires? What if a task is overdue? What if the data source is temporarily unavailable? Documenting these scenarios before you start coding — and including them in your prompt — saves multiple rounds of back-and-forth with the AI later.
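The edge cases above can be captured in a small state-selection function before any UI work begins. This is a hypothetical sketch with invented data shapes, but it shows how documenting the scenarios first turns directly into testable logic:

```javascript
// Hypothetical sketch: choosing which view to render based on the edge
// cases enumerated above. State names and data shapes are invented.
function onboardingViewState({ dataAvailable, newHires }) {
  if (!dataAvailable) {
    return { view: "error", message: "Data source temporarily unavailable. Try again later." };
  }
  if (newHires.length === 0) {
    return { view: "empty", message: "No pending new hires." };
  }
  const today = new Date().toISOString().slice(0, 10);
  // A hire is overdue if any incomplete task has a due date in the past.
  const overdue = newHires.filter(h =>
    h.tasks.some(t => !t.done && t.dueDate < today)
  );
  return { view: "list", overdueCount: overdue.length };
}
```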


6. Always Be As Detailed As Possible

The AI's assumptions may be way off, or simply out of line with your request. Provide context, constraints, and boundaries in your prompt. Specify technologies, libraries, data formats, performance requirements, and anything else that matters. The less the AI has to guess, the better your results. When you leave gaps, the AI fills them with confident, elegant guesses — and those guesses are often wrong.

Example: You need a Tableau calculated field that flags patients in a research dataset whose A1C values have been above 9.0 for three or more consecutive quarterly readings. Instead of asking "write a Tableau formula to find patients with high A1C," you specify: "Write a Tableau calculated field using LOD expressions. The data source is a SQL Server table called participant_labs with columns participant_id, lab_date, lab_type, and lab_value. I only need rows where lab_type = 'A1C'. A patient should be flagged if their three most recent A1C values (by lab_date) are all above 9.0. The field should return 'Flag' or 'No Flag' as a string." The AI now has enough detail to produce something you can actually use.
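One practical way to validate the AI-generated Tableau field is to re-implement the same rule in a few lines of JavaScript and run both against a shared sample. This sketch mirrors the column names from the prompt above; it is a cross-check, not the Tableau formula itself:

```javascript
// JavaScript cross-check of the flagging rule: the three most recent A1C
// readings (by lab_date) must all exceed 9.0. Column names mirror the
// participant_labs table described in the prompt.
function flagParticipant(labs) {
  const recent = labs
    .filter(r => r.lab_type === "A1C")
    .sort((a, b) => b.lab_date.localeCompare(a.lab_date))  // newest first (ISO dates)
    .slice(0, 3);
  const flagged = recent.length === 3 && recent.every(r => r.lab_value > 9.0);
  return flagged ? "Flag" : "No Flag";
}
```

If the Tableau field and this sketch disagree on a handful of hand-checked participants, one of them is wrong, and you find out before the dashboard ships.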


7. Never Get Lazy

Don't blindly use whatever AI creates for you. Test it, then test it again and again. You will likely get a great prototype from the start, but it will require many iterations of prompts and fixes to make a tool that people can actually trust. Code review and testing are still your responsibility. The code will look beautiful. It may even run. But "runs without errors" and "produces correct results" are two very different things.

Example: An AI generates a Power BI DAX measure to calculate average days from patient referral to first appointment. The formula looks clean and returns plausible numbers. But when you test against a handful of known cases, you discover it is including cancelled appointments in the calculation, inflating the average. You also find it breaks when a patient has a referral but no appointment yet. Without manual testing against real data, these errors would have gone into a report used by clinical leadership to evaluate access performance.
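The corrected logic can be sketched outside of DAX to make the two failure modes explicit: cancelled appointments must be excluded, and referrals with no appointment yet must be skipped rather than breaking the average. Field names here are invented for illustration:

```javascript
// Hypothetical sketch of the corrected referral-to-appointment calculation.
// Cancelled appointments are excluded; referrals with no qualifying
// appointment are left out of the average instead of breaking it.
function avgDaysToFirstAppt(referrals) {
  const days = referrals
    .map(r => {
      const kept = (r.appointments || []).filter(a => a.status !== "Cancelled");
      if (kept.length === 0) return null;            // no appointment yet: exclude
      const first = kept.map(a => a.date).sort()[0]; // earliest non-cancelled (ISO dates)
      return (new Date(first) - new Date(r.referralDate)) / 86400000;
    })
    .filter(d => d !== null);
  return days.length ? days.reduce((a, b) => a + b, 0) / days.length : null;
}
```

Running a sketch like this against a handful of hand-verified patients is exactly the kind of test that catches the inflated average before leadership ever sees it.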


8. Don't Give Up, But Know When to Pivot

Expect to iterate — a lot. If the AI is stuck in a loop or heading in the wrong direction, don't be afraid to tell it to try a completely different approach. Sometimes a fresh prompt or a different framing of the problem is all it takes. Persistence and flexibility go hand in hand. The AI is not always being stubborn — it genuinely cannot recognize when it is going in circles. That awareness has to come from you. Keep in mind that there are times when the AI insists on a specific direction, and you must use firm language to call it out.

Example: You are asking AI to generate a single-file JavaScript application that renders an interactive Gantt chart for tracking research study milestones, embedded in a SharePoint page. After several rounds, the AI keeps producing solutions that depend on external NPM packages you cannot install in the SharePoint environment. Instead of continuing to patch the same approach, you tell the AI: "The NPM approach won't work. I need everything in a single HTML file with no external dependencies. Use only vanilla JavaScript and inline SVG to draw the Gantt bars. Stop bringing in toolsets I did not ask for." The AI pivots and produces a working solution within two more iterations. This kind of redirect was a frequent occurrence during DataLaVista development.
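A minimal version of the "vanilla JavaScript plus inline SVG" approach the redirect asked for might look like this. The milestone fields and pixel scale are invented; the point is that one `<rect>` per milestone needs no external packages at all:

```javascript
// Minimal sketch: dependency-free Gantt bars as inline SVG.
// Milestone fields (name, start, end) and the pixel scale are invented.
function ganttSvg(milestones, pxPerDay = 4, rowHeight = 20) {
  const origin = Math.min(...milestones.map(m => Date.parse(m.start)));
  const bars = milestones.map((m, i) => {
    const x = ((Date.parse(m.start) - origin) / 86400000) * pxPerDay;
    const w = ((Date.parse(m.end) - Date.parse(m.start)) / 86400000) * pxPerDay;
    // One rect per milestone; <title> gives a hover tooltip for free.
    return `<rect x="${x}" y="${i * rowHeight}" width="${w}" height="${rowHeight - 4}">` +
           `<title>${m.name}</title></rect>`;
  });
  return `<svg xmlns="http://www.w3.org/2000/svg">${bars.join("")}</svg>`;
}
```

The returned string can be assigned to a container's `innerHTML` inside a SharePoint-embedded page, which is exactly the constraint that ruled out the NPM-based solutions.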


9. Isolate and Validate Your Data Early

Identify and validate the data you will need as early as possible in the process. This helps ensure the AI is on the right track and saves significant effort as you iterate. Catching data issues early prevents cascading problems later. AI will happily build an elegant, beautiful pipeline on top of data that is fundamentally flawed — and it will never notice.

Example: You are building an automated report that pulls financial data from a department's general ledger to show monthly spending against budget by cost center. Before writing any code, you pull a small sample of the raw data and review it with the finance team. You discover that some cost centers were reorganized mid-fiscal-year, and historical entries still reference the old codes. Catching this before the AI starts generating transformation logic saves you from building a pipeline on a faulty foundation — and from having to explain incorrect numbers to the department chair.
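A small validation pass like the one below can encode what you learned from the finance team before any pipeline logic exists. The old-to-new cost center mapping and field names are invented examples:

```javascript
// Hypothetical sketch: normalize retired cost center codes and surface
// unknown ones before building any transformation logic on top.
// The mapping and the valid-center list are invented examples.
const costCenterMap = { "CC-OLD-100": "CC-200", "CC-OLD-101": "CC-201" };
const validCenters = new Set(["CC-200", "CC-201", "CC-300"]);

function validateLedger(rows) {
  const unknown = [];
  const normalized = rows.map(r => {
    const cc = costCenterMap[r.costCenter] || r.costCenter;  // remap retired codes
    if (!validCenters.has(cc)) unknown.push(r.costCenter);   // collect anything unrecognized
    return { ...r, costCenter: cc };
  });
  return { normalized, unknown };
}
```

Handing the AI this validated, normalized sample (instead of the raw ledger) means every downstream prompt starts from data you already trust.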


10. Use AI Extensions in Your Development Tools

Whenever possible, use AI extensions or plugins for your preferred development tool (editor or IDE) instead of the web chat interface. When you dump code into a chat window, the AI may limit how much it actually processes. For example, some models will only read the first few hundred lines of pasted code; beyond that, they may only look at function names without examining their implementations or details.

When you use AI directly within your development environment, the AI has direct access to your files and to command-line tools (like grep) that let it filter for just the code it needs, without loading your entire codebase into memory. This can reduce the number of tokens consumed, giving you more productive time before running into session limits. To narrow code searches even further (and thus reduce token use), create an ignore file (e.g., ".claudeignore" in Claude Code for VS Code) that tells the AI assistant to ignore everything except your source files.
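Assuming the ignore file uses gitignore-style patterns (as most such tools do), a minimal version for a single-file SharePoint project might look like this. The file extensions are placeholders; list whatever your own source files actually use:

```
# Hypothetical ignore file: hide everything from the AI assistant
# except the source files it actually needs to read.
*
!*.js
!*.html
!*.css
```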

Example: The DataLaVista single-file script grew to several thousand lines of JavaScript, HTML, and CSS. Pasting the entire file into a chat window meant the AI would silently ignore large sections of the code, or skip over contents of functions, leading to suggestions that were — you guessed it — beautiful, elegant, and completely wrong for the existing codebase. Switching to an IDE-based AI extension allowed the AI to search the file for relevant functions, understand the full context of a change, and produce edits that worked within the existing code — dramatically reducing the number of iterations needed per fix. The same was true for the QGenda integration, where scheduling logic, retirement modeling, and front-end rendering all lived in a single file, and tokens were being consumed far too quickly.


 

 

Notes

  • These tips apply to all generative AI coding tools, including those available through U-M GPT, Microsoft Copilot, Google AI Studio, Claude, Gemini, and other AI assistants.
  • Always follow University of Michigan data privacy and security policies when using AI tools with sensitive or regulated data. In fact, avoid feeding the AI real data whenever you can.
  • Never paste PHI, PII, or other sensitive data into AI prompts unless you are using a tool and model explicitly approved for that data classification.

 


About the Author

Gabriel Mongefranco is a Mobile Data Architect at the University of Michigan's Eisenberg Family Depression Center. Gabriel has over a decade of experience with automation, data analytics, database architecture, dashboard design, software development, and technical writing. He supports U-M researchers with data cleaning, data pipelines, automation and enterprise architecture for wearables and other mobile technologies.


 

Details

Article ID: 15135
Created: Thu 3/26/26 9:27 AM
Modified: Thu 4/2/26 12:11 AM
Author(s): Gabriel Mongefranco; Automators Anonymous
Code Repository: GitHub Code Repository URL

Related Articles (2)

A short tutorial on how to get started with generative AI tools at the University of Michigan, and tips for avoiding common pitfalls. Focus is on U-M GPT and language models GPT 3.5, GPT 4 Turbo, DALL·E 3, and Llama 2.
This article explains what time dimensions are, why they matter for health research and business data analysis, and how they solve the common but frustrating problem of analyzing time-series data that sits at variable time scales. Ready-to-use AI assistant prompts to generate these tables are freely available in the EFDC AI Prompt Database on GitHub, with support for multiple platforms including Power BI / Power Query (M), Python, R, SQL, JavaScript, Lua, PowerShell, and Bash.