Practical tools and strategies for compliance professionals managing more than one person can reasonably handle.
You did not choose this job. In most cases, someone handed it to you.
At small and mid-size covered entities and businesses handling personal data, the privacy officer title rarely comes with a team, a budget, or a clear onboarding plan. It comes with a stack of regulatory requirements, a filing cabinet of policies that may or may not reflect current law, and the expectation that you will figure it out.
Your actual workload on any given week might include reviewing a vendor contract for data handling language, responding to an individual rights request, updating a privacy notice to reflect a new state law, answering a staff question about what can be shared with a patient's family member, and preparing for an internal audit. All while doing whatever your original job was before the privacy officer role got added to it.
This is not a criticism of your organization. It is the reality for the majority of privacy officers outside of large health systems and enterprise companies. The regulations were written for organizations with dedicated compliance departments. Most organizations do not have one.
AI does not solve that structural problem. But it does change what one person can reasonably accomplish in a day. Before we get into specific use cases, it helps to understand what working with an AI tool actually looks like, because the value is not obvious until you have done it once.
If you have not used an AI tool for work tasks before, the interaction is simpler than it sounds. You open a tool like Claude, ChatGPT, or a similar platform in your browser. You describe your situation and what you need in plain language. This description is called a prompt. The tool responds with a draft, a summary, an analysis, or whatever you asked for. You read it, evaluate it against what you know, revise it, and use what is useful.
That is the full loop. There is no coding involved, no technical setup for basic use, and no specialized vocabulary required. If you can write an email describing a problem to a colleague, you can write a prompt.
The quality of what you get back depends heavily on how clearly you describe what you need. A vague request produces a vague response. A specific request, one that includes your regulatory context, your organization type, what you already have, and what gap you are trying to fill, produces output that is genuinely useful as a starting point. The prompt library further down this page is built specifically for compliance use cases, so you do not have to figure out the phrasing from scratch.
One realistic expectation to set now: the first output AI gives you is rarely the final version of anything. It is a draft. Your job is to review it, correct what is wrong, and shape it into something that accurately reflects your organization and its obligations. That review step is not optional. But it is almost always faster than starting from a blank page.
Practical use cases across your compliance workload, with examples of what each interaction looks like in practice.
The honest framing first: AI is a drafting and research assistant. It handles cognitive overhead, the work that happens before the work. It produces first drafts, surfaces relevant considerations, summarizes dense regulatory language, and generates starting points that would otherwise take hours to build from scratch.
It does not replace your judgment. It does not know your organization's specific facts, risk posture, or operational context. Any output AI generates requires your review before it becomes a policy, a training document, a letter, or a decision. That accountability stays with you. What changes is how long it takes you to get to a reviewable draft.
Below are the areas where AI delivers the most practical value for a compliance professional managing a full workload. Each includes a brief example of what that interaction looks like in practice.
Writing a new policy from a blank document is time-consuming even when you know exactly what it needs to say. AI shortens that process significantly. You can describe the regulatory requirement, your organization type, and what the policy needs to accomplish, and get a structured first draft back in under a minute. AI is also useful for plain-language rewrites, taking a policy written in regulatory language and producing a version your workforce will actually read and understand.
A privacy officer at a small behavioral health clinic needs to update the organization's minimum necessary policy to reflect a recent workflow change. Rather than rewriting from scratch, she prompts the AI with her organization type, the regulatory requirement under 45 CFR 164.514(d), and the specific gap she needs to address. The AI returns a structured draft in under a minute. She reviews it against the actual regulation, adjusts two provisions that do not reflect how her clinic handles verbal communications, and has a revised draft ready for her compliance committee that same afternoon. Previously, she would have spent that entire afternoon just building the outline.
AI-generated policies may reference requirements accurately at a general level but miss jurisdiction-specific nuances or cite regulatory language imprecisely. Verify every citation against the actual regulation at HHS.gov before any document is finalized.
State privacy law is expanding rapidly. Keeping current on what has changed, what is pending, and how new requirements compare to what you already have in place is a significant time drain. AI can summarize guidance documents, compare requirements across state laws, and help you identify which of your existing policies may need updating when a new law takes effect.
A privacy officer supporting a multi-state employer learns that two additional states have enacted comprehensive privacy laws with employee data provisions that may affect the company's HR data handling practices. Rather than reading both statutes in full before knowing where to focus, she asks the AI to compare the key controller obligations under both laws against the framework she already has in place under CPRA, directing it to use official state legislative sources. The AI produces a side-by-side comparison of notice requirements, data subject rights timelines, and sensitive data categories. She uses it to identify the two areas requiring immediate policy attention and schedules the rest for the next quarterly review cycle.
AI knowledge has a cutoff date and may not reflect the most recent regulatory developments. Use AI output as a starting orientation, then confirm current status against official state sources before making compliance decisions.
Building workforce training from scratch (writing scenarios, drafting quiz questions, developing module outlines) is time-intensive work that often gets deprioritized. AI can produce training outlines, generate realistic scenario-based examples, and draft quiz questions at a specified difficulty level in a fraction of the time manual development requires.
A privacy officer at a regional home health agency needs to deliver annual HIPAA privacy training but has no budget for a third-party vendor. He prompts the AI to build a 45-minute training outline for home health aides covering minimum necessary, verbal communications in patient homes, and mobile device handling, directing it to use OCR guidance as its source. He then asks it to generate five scenario-based quiz questions appropriate for non-clinical staff. The outline and draft questions come back in minutes. He reviews them for accuracy, replaces two scenarios with situations more specific to home health workflows, and has a complete training framework ready to build into slides. A process that previously took the better part of a week is done in an afternoon.
Scenarios need to reflect situations your specific workforce actually encounters. Generic scenarios reduce training effectiveness. Review AI output for relevance to your operational context, not just regulatory accuracy.
Reviewing vendor privacy notices, identifying missing or weak language in data processing agreements, and preparing for vendor conversations are all tasks where AI adds measurable value. You describe the type of provision you are reviewing and the gap you are trying to identify, and AI helps you structure your analysis.
A privacy officer receives a data processing addendum from a new HR software vendor. Rather than working through it line by line without a framework, she describes the agreement type and asks the AI to identify what provisions a HIPAA-compliant business associate agreement should include under 45 CFR 164.504(e) and flag common gaps to watch for. She uses the resulting checklist to review the actual document systematically, identifies that the vendor's breach notification timeline exceeds the 60-day outside limit under the Breach Notification Rule, and goes back to the vendor with a specific revision request rather than a general concern.
Do not paste actual vendor contract language into a consumer AI tool. Describe the provision type and the gap you are assessing. The AI does not need the actual document text to help you build a review framework. Section 4 of this page covers input discipline in detail.
Tabletop exercises, response checklists, and notification letter drafts are strong candidates for AI assistance. Having a solid starting point for these materials matters because incident response preparation is exactly the work that gets skipped when a privacy officer is managing a full workload alone.
A privacy officer at a federally qualified health center wants to run a tabletop exercise for the first time but has never designed one. She prompts the AI to generate a realistic breach scenario appropriate for a small primary care setting, along with a facilitator guide and discussion questions that walk the response team through detection, containment, risk assessment, and notification decisions, directing it to draw from OCR breach notification guidance. The AI produces a complete tabletop package. She reviews it, adjusts the scenario to reflect her center's actual EHR system, and runs the exercise the following month, something she had been meaning to do for two years but could not find the time to build from scratch.
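For facilitators who want a concrete parameter built into the notification-decision portion of the exercise, the individual notice window is simple date arithmetic. The sketch below uses only Python's standard library and a hypothetical discovery date; the 60-calendar-day figure is the outside limit under 45 CFR 164.404(b), a ceiling rather than a target, since notice must go out without unreasonable delay.

```python
from datetime import date, timedelta

# The Breach Notification Rule's outside limit for individual notice:
# no later than 60 calendar days after discovery (45 CFR 164.404(b)).
NOTICE_WINDOW = timedelta(days=60)

def individual_notice_deadline(discovery_date: date) -> date:
    """Latest permissible date for individual breach notification."""
    return discovery_date + NOTICE_WINDOW

# Hypothetical discovery date for a tabletop scenario.
deadline = individual_notice_deadline(date(2024, 3, 1))
# deadline == date(2024, 4, 30)
```

Putting a specific date in front of the response team during the exercise makes the regulatory clock tangible in a way the abstract rule does not.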
Preparing for an audit involves significant documentation work: organizing evidence, drafting narrative responses, and identifying gaps between what your policies say and what your practices reflect. AI can help structure that preparation and draft narrative responses based on the audit criteria you are working against.
A privacy officer learns his organization has been selected for a state attorney general inquiry related to its consumer health data practices. He uses AI to draft initial responses to each inquiry question using the regulatory language and his policy summaries as inputs, keeping all actual organizational data out of the tool, then reviews and revises each response for accuracy and completeness before attorney review. What would have been a full day of drafting is compressed to a focused two-hour review session.
Understanding AI's limitations is as important as understanding its capabilities.
AI is a capable assistant. It is not a compliance officer, a lawyer, or a regulator. The distinction matters more in this field than in most.
Privacy and security compliance work carries real accountability. When a policy goes out under your organization's name, when a breach notification letter goes to an affected individual, when a risk assessment conclusion gets documented, those are professional judgments with regulatory and legal weight. AI does not share that accountability. You do.
Understanding where AI's limitations are most consequential helps you use it without creating new problems while solving old ones.
Every AI output is generic until you make it specific. A policy draft AI generates does not know your patient population, your workforce size, your existing agreements, your state of domicile, or the gap between what your current policies say and what your operations actually do. The draft is a starting point. Turning it into a document that accurately reflects your organization requires your knowledge of that organization. Skipping that review step is where AI-assisted compliance work creates liability rather than reducing it.
This is the most important limitation to internalize. AI models are trained on large bodies of text, but they can generate confident-sounding regulatory citations that are inaccurate, outdated, or simply fabricated. A policy that cites a CFR provision for a requirement that does not actually appear there is worse than a policy with no citation at all. It signals to an auditor that your compliance program is not grounded in the actual regulation.
A privacy officer asks AI to draft a policy section on workforce sanctions and receives a clean, professional draft that cites a specific CFR provision. The citation looks right. The section number is plausible. But when she checks it against the actual regulation at HHS.gov, the cited provision covers something else entirely. The requirement she needed was in a different subsection. Had she published the policy without checking, she would have distributed a document with a fabricated regulatory basis. The rule is straightforward: verify every regulatory citation AI produces against the source before that document leaves your desk.
Vague prompts produce vague output. If you ask AI to "write a privacy policy," you will get something generic. If you give it the specific regulatory framework, your organization type, the gap you are trying to address, and examples of language that does or does not fit your context, the output improves substantially. Getting useful AI output is a skill worth developing deliberately. The prompt library below gives you a starting point built specifically for compliance use cases.
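If you find yourself typing the same context into every prompt, a reusable template helps enforce that specificity. No coding is required to use AI for any of the work described on this page, but for readers comfortable with a short script, here is an illustrative sketch; the field names and template wording are examples of the elements a specific prompt should carry, not a standard.

```python
# Illustrative sketch: assembling a specific compliance prompt from
# structured fields rather than writing it free-form each time.
# The template wording and field names are examples, not a standard.

PROMPT_TEMPLATE = """You are assisting a privacy officer at a {org_type}.
Regulatory framework: {framework}.
What we already have in place: {existing}.
Gap to address: {gap}.
Task: {task}. Cite the specific regulatory provisions you rely on
so they can be verified against the primary source."""

def build_prompt(org_type, framework, existing, gap, task):
    """Fill the template; every argument is plain language, never PHI or PII."""
    return PROMPT_TEMPLATE.format(
        org_type=org_type,
        framework=framework,
        existing=existing,
        gap=gap,
        task=task,
    )

prompt = build_prompt(
    org_type="small behavioral health clinic",
    framework="HIPAA Privacy Rule, 45 CFR 164.514(d) minimum necessary",
    existing="a 2021 minimum necessary policy covering written records only",
    gap="verbal communications during care coordination",
    task="Draft a policy section addressing the gap",
)
```

The point is not the script itself but the discipline it encodes: every prompt carries the organization type, the regulatory frame, what already exists, and the gap, which is exactly what separates useful output from generic output.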
Minimum necessary determinations, breach risk assessments, and responses to complex individual rights requests all require applying regulatory standards to specific facts in ways that carry professional accountability. AI can help you think through the framework. It cannot make the call.
A privacy officer is working through a four-factor breach risk assessment following a misdirected fax containing PHI. He asks AI to walk him through the assessment framework under 45 CFR 164.402. The AI accurately outlines the four factors and explains how each is typically evaluated. That is genuinely useful. But when he describes the specific facts of the incident and asks whether it constitutes a reportable breach, the AI produces a plausible-sounding conclusion that reflects a general pattern rather than a careful application of the facts at hand. The determination still requires his professional judgment. Treat AI output on these questions as a thinking partner, not a decision-maker.
A privacy officer using AI tools needs to apply the same scrutiny they apply to any vendor handling organizational data.
There is an obvious tension in a privacy officer using AI tools to assist with privacy compliance work. You are the person in your organization responsible for protecting personal information. AI tools, particularly consumer-grade tools, raise legitimate questions about where your inputs go, how they are stored, and whether using them creates a data handling issue you would flag immediately if a vendor presented it to you.
This section is not an argument against using AI. It is an argument for using it the way a privacy officer should: with your eyes open.
The most important habit to build is discipline about what you put into an AI tool. Public or consumer-grade AI interfaces, the kind you access through a browser without an enterprise agreement, typically collect and may use input data for model training or operational purposes depending on the platform's current terms. Those terms change. Assuming your inputs are private because the session ended is not a safe assumption.
Do not paste PHI, PII, internal audit findings, breach details, employee records, vendor contract terms with confidentiality clauses, or any information your organization has an obligation to protect into a consumer AI tool. The fact that you are using the information for compliance purposes does not change your organization's obligations regarding how that information is handled.
Sanitizing your inputs before using AI is a straightforward habit once you build it. Replace real names with placeholders. Describe a scenario in general terms rather than using actual case details. Use document structure and regulatory framing as your input rather than live organizational data. In most compliance drafting tasks, AI does not need real data to produce useful output. It needs context, structure, and regulatory parameters.
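For privacy officers who sanitize inputs at volume, parts of the habit can be automated. The sketch below uses only Python's standard library; the patterns (SSN, phone, email, plus a known-names list) are examples rather than a complete identifier inventory, and automated redaction supplements, never replaces, a manual read-through before anything is pasted into a tool.

```python
import re

# Illustrative pre-prompt sanitization. Patterns below are examples only;
# a real workflow needs patterns tuned to your own data, and a human
# review of the sanitized text before it enters any AI tool.
PATTERNS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",
    r"\b\d{3}[-.]\d{3}[-.]\d{4}\b": "[PHONE]",
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",
}

def sanitize(text, known_names=()):
    """Replace obvious identifiers with placeholders before prompting."""
    for pattern, placeholder in PATTERNS.items():
        text = re.sub(pattern, placeholder, text)
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

raw = "Jane Doe (555-867-5309, jdoe@example.com) received a misdirected record."
clean = sanitize(raw, known_names=["Jane Doe"])
# clean == "[NAME] ([PHONE], [EMAIL]) received a misdirected record."
```

Note the design choice: the function takes a list of known names rather than attempting automatic name detection, because name recognition is unreliable and a privacy officer already knows whose information appears in the material being sanitized.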
A privacy officer needs help drafting a breach notification letter following an incident involving a specific patient. Rather than pasting the actual incident details into the AI tool, she describes the scenario type: a misdirected paper record containing treatment information sent to the wrong recipient, discovered within 24 hours, with no evidence of further disclosure. She asks the AI to draft a notification letter meeting the content requirements under 45 CFR 164.404(c). The AI produces a complete draft she can review and adapt. The actual patient name, record contents, and incident specifics never entered the tool.
If your organization has access to an enterprise AI platform, a version of Claude, ChatGPT, Gemini, or a similar tool procured through a business agreement with appropriate data handling terms, the risk profile is different. Enterprise agreements typically include data processing terms, opt-out of training use, and contractual commitments about data retention. Before using any AI tool for compliance work, know which version you are using and what the data handling terms actually say. If your organization does not have an enterprise agreement and the volume of your compliance work justifies it, that conversation with leadership or IT is worth having.
If you are a covered entity or business associate under HIPAA and you are considering using an AI tool in a workflow that touches PHI, even incidentally, you need to know whether a Business Associate Agreement is in place with that vendor. Some AI platform vendors offer BAAs under enterprise agreements. Most do not offer them for consumer tiers. Using a tool that touches PHI without a BAA in place is a compliance violation regardless of the business purpose.
The practical implication is straightforward: build your AI workflows so that PHI never enters the tool. That is the cleanest solution and the one that does not require you to chase down BAA coverage every time a new tool enters your workflow.
Ready-to-use prompts organized by stage of the compliance calendar. Each prompt is designed to produce research orientation, gap awareness, or process understanding, not finished compliance documents.
Do not include PHI, PII, internal audit findings, breach details, employee records, or confidential vendor contract language in any prompt submitted to a consumer-grade AI tool. Describe scenarios in general terms and use regulatory framing rather than live organizational data. Section 4 covers input discipline in full.
After reviewing any AI output, verify all regulatory citations against the primary source linked in the prompt before relying on the information.
What this produces: A checklist of program components and diagnostic questions grounded in OCR guidance. You learn what a complete program looks like and where your gaps likely are. No documentation is produced, only awareness of what documentation you need.
After reviewing the output: Use the component list to conduct a document inventory of what your program currently has in place. Any gap identified is a documentation priority. Our HIPAA Essentials Library covers the policy, procedure, and operational tool categories most commonly identified as missing during OCR audits.
What this produces: A planning orientation grounded in the actual regulatory framework. Output is conceptual, not a finished plan document. You learn the shape of the work and how to sequence it.
After reviewing the output: Map the priority areas identified against your current documentation inventory. Our Privacy Officer Starter Kit includes the foundational documents most commonly prioritized in the first compliance year.
What this produces: A prioritized list of policy areas most likely to have drifted from current requirements or enforcement focus. You learn where to look, not what your policies should say.
After reviewing the output: Cross-reference the identified policy areas against your current document inventory. Our policy bundles are organized by compliance area to make targeted updates straightforward.
What this produces: A current awareness summary of regulatory developments and a map of which policy areas they affect. You understand what has changed and what categories of documentation may be affected.
After reviewing the output: Confirm identified changes against the Federal Register directly before acting on them. Use the affected policy categories as your update priority list.
What this produces: A research-oriented summary of individual rights requirements and common compliance failures drawn from enforcement activity. You learn what complete and compliant rights request handling looks like and where your program may fall short.
After reviewing the output: Audit your current forms and procedures against the requirements identified. Our individual rights request form bundle covers the full set of access, amendment, restriction, and accounting forms built to current OCR standards.
What this produces: A clear picture of what the regulation requires for training and what OCR evaluates during audits. You learn the compliance standard your training program is measured against.
After reviewing the output: Assess your current training program against the OCR evaluation criteria identified. Our training materials include a Privacy Training deck, Security Awareness deck, and Incident Response deck built to meet OCR documentation expectations.
What this produces: A gap awareness summary organized by workforce role grounded in actual OCR findings. You learn where training programs most commonly fail and what documentation OCR expects to find.
After reviewing the output: Map the role-based gaps identified against your current training delivery and documentation. If your program lacks role-differentiated training records, that is a priority documentation gap before your next audit.
What this produces: A documentation checklist orientation grounded in audit protocol requirements. You learn what records you need to maintain and where documentation programs typically fall short.
After reviewing the output: Audit your current training records against the documentation standard identified. Our training compliance toolkit includes attendance logs, acknowledgment forms, and completion tracking tools built to OCR audit expectations.
What this produces: A research-grounded summary of what a compliant BAA must contain, where agreements most commonly fall short, and which vendor relationships trigger the BAA requirement that organizations miss.
After reviewing the output: Audit your current vendor portfolio against the BAA requirement triggers identified. Our Business Associate Agreement template is built to the required elements under 45 CFR 164.504(e) and includes the provisions most commonly cited in OCR deficiency findings.
What this produces: A gap awareness summary of what a complete vendor oversight program looks like from OCR's perspective, grounded in audit protocol and enforcement activity.
After reviewing the output: Assess whether your current vendor management program includes the oversight components OCR expects. Our vendor management bundle includes a Business Associate Risk Questionnaire and Annual Vendor Review Template built to OCR oversight standards.
What this produces: A research orientation on an area of vendor management that many privacy officers at small and mid-size organizations are not actively managing. You learn that the obligation exists, what it requires, and what questions to be asking.
After reviewing the output: If your current BAA template does not address subcontractor obligations, that is a priority gap. Our Business Associate Agreement template includes the subcontractor flow-down provisions required under the HITECH amendments.
What this produces: A regulation-grounded summary of what the Breach Notification Rule requires across the full response lifecycle. You learn the compliance standard your incident response program is measured against and where programs most commonly fail.
After reviewing the output: Assess whether your current incident response program addresses each notification requirement identified. Our Incident Response bundle includes breach log templates, notification letter frameworks, and response checklists built to OCR standards.
What this produces: A research-grounded understanding of what the four-factor assessment requires and how OCR evaluates its adequacy. You learn the analytical framework and the documentation standard.
After reviewing the output: Confirm your current incident response procedures include a documented four-factor assessment step. Our breach risk assessment tool walks through each factor with documentation prompts built to OCR's evaluation criteria.
What this produces: A gap awareness summary of what a complete incident response program looks like from OCR's perspective, grounded in audit protocol and enforcement findings.
After reviewing the output: Map the program components identified against what your organization currently has documented. Missing components represent audit risk. Our incident response documentation bundle covers the full set of program components most commonly cited in OCR corrective action requirements.
What this produces: A research orientation on an area of incident response frequently underdeveloped at smaller organizations. You learn that workforce reporting culture is an OCR evaluation point and where programs most commonly fall short.
After reviewing the output: Assess whether your workforce training addresses incident identification and internal reporting as explicit competencies. Our workforce training materials include incident reporting scenarios and acknowledgment documentation built to OCR expectations.
What this produces: A research-grounded summary of what OCR evaluates, what documentation they expect to find, and where small and mid-size organizations most commonly fall short.
After reviewing the output: Use the documentation requirements identified to conduct an evidence inventory. Any category without current, regulation-grounded documentation is an audit exposure. Our full library is organized by the program areas OCR evaluates during audits.
What this produces: A process orientation on how OCR investigations unfold and what factors drive enforcement outcomes. You learn what to expect if your organization becomes the subject of a complaint and where enforcement attention is concentrated.
After reviewing the output: Review the enforcement focus areas identified against your current program documentation. The program areas OCR most frequently investigates are the same areas where documentation gaps create the greatest exposure.
What this produces: A prioritized self-assessment orientation grounded in actual audit findings. You learn which program areas carry the highest audit risk and what questions to ask when evaluating each one.
After reviewing the output: Work through the priority areas identified and document what exists, what is missing, and what needs to be updated. Our documentation bundles are organized by program area to make targeted gap remediation straightforward.
What this produces: A research orientation on what OCR considers non-negotiable in a compliant privacy program, drawn directly from the corrective action record. You learn what OCR requires organizations to build or rebuild after an enforcement action.
After reviewing the output: The corrective action requirements identified represent OCR's baseline program expectations. If your program is missing any of those components, you have a documented gap against the standard OCR applies when programs are found deficient. Our library covers every program area commonly appearing in OCR corrective action requirements.
What this produces: A current awareness summary of where OCR enforcement attention is concentrated, grounded in the actual resolution agreement record. You learn which program areas carry the highest current enforcement risk.
After reviewing the output: Verify the enforcement trends identified against the OCR resolution agreements index directly. Use the current enforcement focus areas to prioritize your next program review cycle.
What this produces: A current awareness summary of the regulatory pipeline grounded in primary government sources. You learn what is coming, what program areas it affects, and what type of action may be required.
After reviewing the output: Confirm current regulatory status against the Federal Register directly before acting on any identified development. Pending rules do not require action until finalized, but awareness of what is coming allows for advance planning.
What this produces: A current awareness summary of the state privacy law landscape grounded in official sources. You learn which laws are active or approaching, where they create obligations beyond HIPAA, and which program areas warrant review. Confirm all state law information against official state legislative or attorney general sources before acting on it.
After reviewing the output: If your organization operates across multiple states or handles consumer health data, the gaps identified between your HIPAA program and state law requirements represent a documentation priority. Resources covering documentation frameworks for state privacy law compliance beyond HIPAA are coming soon.
What this produces: A research orientation on a category of privacy law expanding rapidly and not yet tracked as a compliance obligation by many privacy officers at covered entities. You learn what consumer health data laws require, whether your organization is in scope, and where gaps between your HIPAA program and these frameworks are most likely to exist.
After reviewing the output: If your organization falls within the scope of consumer health data laws, the documentation requirements identified go beyond what your HIPAA program covers. Resources covering documentation frameworks for consumer health data compliance are coming soon.
An honest overview of the AI tool landscape for compliance professionals, with the evaluation criteria that matter most.
Not all AI tools are equal in a compliance context, and the version of a tool you use matters as much as the tool itself. Before reviewing any specific tool, two questions should guide your decision. First, what version are you using, consumer or enterprise? Second, what are the data handling terms for that version? Those two questions matter more than brand preference or feature comparisons.
Well suited for regulatory research, document drafting assistance, and nuanced analysis of complex compliance questions. The consumer tier is available at claude.ai. Enterprise versions with data processing terms are available through Anthropic's business offerings. Claude tends to handle lengthy regulatory documents and multi-part compliance questions with precision, though citation verification remains essential regardless of which tool you use.
The most widely recognized general-purpose AI tool and capable across all of the compliance use cases described on this page. The consumer tier is available at chatgpt.com. Enterprise versions with data processing commitments are available through OpenAI's business offerings. The platform's widespread adoption means your workforce is more likely to already be familiar with it, which has practical value if you are building AI use into team workflows.
Gemini is integrated across Google Workspace in its enterprise configuration, making it a natural fit for organizations already operating within Google's ecosystem. For organizations using Google Workspace under a business agreement, Gemini's data handling terms may already be addressed within existing contractual arrangements. Confirm this with your IT or legal team before treating it as a compliance-safe tool. Enterprise information is available through Google Workspace.
The distinction between consumer and enterprise versions of these tools is not a marketing detail. It is a compliance consideration. Consumer tiers typically operate under terms that permit the platform to use your inputs for model improvement and other operational purposes. Those terms vary by platform and change over time. Enterprise tiers typically include contractual data processing terms, an opt-out from training use, and defined data retention commitments.
If your organization does not currently have an enterprise agreement with any AI platform, and the volume of your compliance work justifies one, that conversation with leadership or IT is worth having.
A growing number of vendors are building AI tools positioned specifically for compliance and legal use cases. Some offer features like citation grounding, where outputs are linked directly to source documents, and audit trails that support documentation requirements. This category is evolving rapidly, and specific tool recommendations would require more current evaluation than a static page can reliably provide.
If you are evaluating compliance specific AI tools, the questions to ask are the same as for general purpose tools: What are the data handling terms? Is a BAA available if your workflows touch PHI? What does the vendor's own privacy program look like? A compliance tool vendor that cannot answer those questions clearly is a vendor worth approaching with caution.
AI produces starting points. Regulation-grounded documentation is what holds up when it matters.
Privacy compliance is not a problem you solve once. It is a program you build, maintain, and defend over time. The regulations change, enforcement priorities shift, your organization evolves, and the documentation that was current two years ago may not reflect what your program looks like today or what OCR expects to find tomorrow.
AI makes that ongoing work more manageable for a privacy officer running a program without dedicated staff support. It compresses research time, accelerates drafting, surfaces gaps you might not have known to look for, and helps you stay current on a regulatory landscape that does not slow down because your calendar is full.
But AI produces starting points, not finished compliance programs. The difference between a rough AI-generated draft and a regulation-grounded, audit-ready document is the same difference it has always been: professional judgment, organizational context, and documentation built to the standard OCR actually applies.
That is what our templates are built for. Every document in the HIPAA Essentials Library was developed against the actual regulatory requirements, not a summary of them. The policies, procedures, forms, training materials, and operational tools in our library give you a regulation-grounded foundation that AI can help you maintain, communicate, and adapt, but cannot replicate from a prompt.
If the prompts on this page helped you identify gaps in your current program, the next step is straightforward.
Every document in the HIPAA Essentials Library is built to the standard OCR applies, not a summary of it. Start with what your program needs most.