Assessing Documentation Gaps
When a new SQL Server environment comes under a DBA’s care, the first problem that usually surfaces is the lack of any meaningful documentation. Even in companies that have dozens of databases spread across multiple servers, you’ll often find that the only record of what exists is a handwritten note in a file cabinet or, worse, no record at all. The absence of documentation isn’t just an inconvenience; it can turn a routine audit into a firefight that costs time and money.
Start by cataloguing every instance of SQL Server you can locate. Use PowerShell or a simple command-line script to pull a list of SQL instances from the registry or the SQL Server Browser service. Once you have that list, connect to each instance with the highest privilege you can safely use - ideally a domain account with sysadmin rights. From there, run queries against the master database to enumerate the databases, linked servers, and logins. Store the results in a central file so you have a baseline to reference later.
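On SQL Server 2005 and later, the catalog views make this enumeration straightforward. A sketch of the baseline queries, run against each instance, might look like this:

```sql
-- Databases hosted on this instance
SELECT name, state_desc, recovery_model_desc, create_date
FROM sys.databases;

-- Linked servers defined on this instance
SELECT name, product, provider, data_source
FROM sys.servers
WHERE is_linked = 1;

-- Server-level logins: SQL logins, Windows users, Windows groups
SELECT name, type_desc, is_disabled, create_date, modify_date
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G');
```

Capture each result set into your central baseline file so you can diff against it later.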
Next, look for configuration settings that could affect your audit. Check the server properties for options such as max server memory, max degree of parallelism, and the SQL Server Agent startup type. Document the current state of each setting, and note any deviations from recommended best practices. If the server is running an older version of SQL Server, make a note of that as well - certain features and security patches may be missing.
When you’re pulling database-level information, be sure to include database options, users, and permissions. A quick script that gathers the database name, owner, recovery model, and creation date will give you a snapshot of the environment. For each database, query sys.database_principals and sys.database_permissions to understand who has what level of access. This data becomes critical when you later need to enforce least‑privilege principles.
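A starting point, using the standard catalog views (run the second query in each database in turn):

```sql
-- One row per database: name, owner, recovery model, creation date
SELECT d.name                    AS database_name,
       SUSER_SNAME(d.owner_sid)  AS owner_login,
       d.recovery_model_desc,
       d.create_date
FROM sys.databases AS d
ORDER BY d.name;

-- Inside each database: which principal holds which permission
SELECT pr.name AS principal_name,
       pr.type_desc,
       pe.permission_name,
       pe.state_desc
FROM sys.database_principals AS pr
JOIN sys.database_permissions AS pe
  ON pe.grantee_principal_id = pr.principal_id
ORDER BY pr.name, pe.permission_name;
```

Saving both outputs per database gives you the access snapshot you’ll need for the least-privilege work later.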
Another important element is the application code that connects to the databases. Often, application developers store connection strings in web.config files, appsettings.json files, or environment variables. Even if you can’t access every application repository, ask the developers for a list of connection strings, and pay close attention to which logins are referenced. It’s a good idea to store that information in a separate table or spreadsheet, noting the application name, environment (dev, test, prod), and the server/database it targets.
After you’ve collected all the raw data, create a master inventory document. This document should list every instance, the databases it hosts, the logins that exist, and the applications that use those logins. Use a simple table layout so that you can easily search for any server, database, or login. As you work through this process, you’ll begin to see patterns: orphaned databases, logins that only exist in one environment, or applications that reference a login that no longer exists. These are the red flags that demand immediate attention.
Once you’ve established a baseline, take a moment to review the documentation you have created against the company's standard operating procedures or any industry frameworks you are targeting, such as CIS Benchmarks or ISO 27001. This step will help you identify gaps not only in your current environment but also in the organization’s overall documentation practices. If you find that the documentation you’ve created is missing crucial details, you’ll know that a broader cultural shift toward better documentation is needed - something that should be addressed in parallel with the technical audit.
In many cases, the documentation effort can uncover hidden assets that were never intended to be part of the production environment. Perhaps an old test database was left behind, or a login was created for a one‑off script and never removed. Spotting these anomalies early prevents future security or compliance risks. If you discover any such findings, flag them immediately and plan a remediation timeline. The documentation process itself is an exercise in discovery; the more thorough you are, the more confident you can be in the health of the environment you are taking over.
Checking for Weak Passwords with a Quick Audit
SQL Server’s authentication mechanism is a double‑edged sword. It allows for granular control through logins and roles, but it also exposes a weakness: many installations ship with the default sa account left at a blank or trivially simple password. In older SQL Server versions, there is no enforcement of password complexity, and even modern versions can be misconfigured to allow empty passwords. A simple audit can reveal these vulnerabilities before they become a liability.
The tool you’ll use for this audit is a stored procedure that scans all SQL Server logins and flags those that fail basic security criteria. The procedure, spAuditPasswords, runs against the master database and checks three conditions: 1) the password is null; 2) the password matches the login name; and 3) the password is only one character long. While the last condition may seem trivial, it’s a common shortcut that bypasses the password policy engine.
Below is the full script for the procedure. Deploy it to every instance you manage. It creates a temporary table to hold the login names, then outputs three distinct lists, each one corresponding to one of the security checks.
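A sketch of such a procedure, assuming SQL Server 2008 or later (it relies on sys.sql_logins, PWDCOMPARE, and inline variable initialization); the temp-table and alias names here are illustrative:

```sql
CREATE PROCEDURE dbo.spAuditPasswords
AS
BEGIN
    SET NOCOUNT ON;

    -- Temporary table holding the SQL logins to test
    CREATE TABLE #Logins (name SYSNAME, password_hash VARBINARY(256));
    INSERT INTO #Logins (name, password_hash)
    SELECT name, password_hash FROM sys.sql_logins;

    -- 1) Logins with a blank (null/empty) password
    SELECT name AS blank_password
    FROM #Logins
    WHERE PWDCOMPARE('', password_hash) = 1;

    -- 2) Logins whose password matches the login name
    SELECT name AS password_is_login_name
    FROM #Logins
    WHERE PWDCOMPARE(name, password_hash) = 1;

    -- 3) One-character passwords, tested against printable ASCII
    CREATE TABLE #Candidates (c CHAR(1));
    DECLARE @i INT = 33;
    WHILE @i <= 126
    BEGIN
        INSERT INTO #Candidates VALUES (CHAR(@i));
        SET @i += 1;
    END;

    SELECT DISTINCT l.name AS one_char_password
    FROM #Logins AS l
    JOIN #Candidates AS c
      ON PWDCOMPARE(c.c, l.password_hash) = 1;

    DROP TABLE #Logins, #Candidates;
END;
```

On versions before 2005 the password hashes live in syslogins rather than sys.sql_logins, so the queries would need adjusting.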
Run the procedure with EXEC dbo.spAuditPasswords. Review the three result sets carefully. For each login that appears in a list, plan a remediation action: either replace the login entirely, change its password to a strong value, or remove it if it serves no purpose. For the sa account, consider disabling it, or renaming it and reserving it strictly for emergency maintenance, as it should never be used by applications or developers.
In addition to the custom procedure, it’s worthwhile to enable SQL Server’s built‑in password policy enforcement. You can set the CHECK_POLICY and CHECK_EXPIRATION options when creating or altering a login. The policy will enforce minimum length, complexity, and expiration dates. Use the following example to enforce policy on an existing login:
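A minimal example, with AppUser as a placeholder login name:

```sql
-- Enforce the Windows password policy (length, complexity, lockout)
-- and expiration on an existing SQL login
ALTER LOGIN [AppUser] WITH CHECK_POLICY = ON, CHECK_EXPIRATION = ON;
```

Note that CHECK_EXPIRATION requires CHECK_POLICY to be on, and that enabling either may force a password change at the login’s next connection, so coordinate with the application owner first.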
By combining the audit script with policy enforcement, you create a two‑tier defense: an initial scan to discover weak logins, and a permanent check to keep future passwords strong. Remember to schedule the audit to run regularly - ideally as part of your quarterly security review or after any significant change to the environment. If you notice a login slipping through the policy or a new login is added with a weak password, you’ll catch it before it becomes a real threat.
Fixing Password Weaknesses and Building a Policy
Once the audit identifies vulnerable logins, the next step is remediation. It’s tempting to just change every password, but doing so without a plan can break applications, replication partners, or scheduled jobs. Start by building a remediation plan that considers the impact on each login. For every account flagged, document the processes or applications that rely on it. This inventory will guide you in deciding whether to retire the login, replace it with a stronger alternative, or simply update the password.
A common best practice is to create a dedicated “maintenance” login that has the necessary permissions to run jobs, maintain backups, and perform system administration tasks. Keep the sa account disabled unless it’s absolutely necessary for a specific migration. The maintenance login should use a complex password, have CHECK_POLICY = ON, and be locked out after a certain number of failed attempts. Use the following example to create such a login:
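A sketch, with the login name and password as placeholders; the lockout-after-failed-attempts behavior comes from the Windows account policy once CHECK_POLICY is on:

```sql
CREATE LOGIN [SvcMaintenance]
    WITH PASSWORD = 'Str0ng!Maint#Passw0rd',
         CHECK_POLICY = ON,
         CHECK_EXPIRATION = ON;

-- Grant only what the maintenance work requires rather than full
-- sysadmin where possible. ALTER SERVER ROLE is SQL Server 2012+;
-- on older versions use sp_addsrvrolemember instead.
ALTER SERVER ROLE [dbcreator] ADD MEMBER [SvcMaintenance];
```

Adjust the role memberships to the jobs the account actually runs; backups, for instance, can often be covered with db_backupoperator at the database level.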
After creating the maintenance account, audit the server to ensure no other accounts hold unnecessary privileges. Drop or disable any that are redundant. If an application relies on the sa account, consider creating a new login for that application and granting it the minimal permissions it needs. This approach follows the principle of least privilege and reduces the attack surface.
Passwords themselves should be managed carefully. Microsoft recommends rotating passwords at least every 90 days for privileged accounts. A practical way to enforce rotation is to set CHECK_EXPIRATION = ON and schedule a reminder or an automated job that prompts the administrator to change the password. For the sa account, a monthly rotation is reasonable because it’s rarely used; the quick reset reduces the window in which an attacker could compromise the account.
Documentation is another critical component of a robust policy. As you make changes, update a central log that records the date, the account affected, the action taken, and any notes about impact on applications or services. Store this log in a version‑controlled repository so that future DBAs can trace the history of changes. It also aids compliance audits by providing evidence that the organization follows a documented process.
Once you’ve applied new passwords and updated privileges, validate the environment. Run your SQL Server Profiler trace to ensure that no login failures occur. Check the event logs for any failed authentication attempts that may indicate an attempt to brute‑force a weak password. If everything passes, you’ve successfully tightened the security posture of the environment and established a repeatable process for future changes.
Performing a Login Audit: Talk, Trace, and Who2
A thorough login audit involves gathering data from three complementary sources: conversations with developers, a SQL Server Profiler trace, and the output of the sp_who2 stored procedure. Each source provides a different lens on how logins are used, which helps uncover hidden or undocumented usage.
Begin by interviewing developers and application owners. Ask them which logins their code references, and request any configuration files that contain connection strings. Make a note of the environments (development, staging, production) and the corresponding servers. While developers may not remember every detail, the conversation often surfaces obvious gaps - such as a login that is only used in a test environment or one that was created for a one‑off script.
To capture runtime activity, set up a Profiler trace that monitors the Audit Login event class. Add the hostname data column so you can see which machines are connecting to the server. Configure the trace to run continuously or during peak hours, and save the data to a file for later analysis. When reviewing the trace, look for repeated logins from unfamiliar hosts; those might signal an automated job or a rogue script that hasn’t been documented.
Parallel to the trace, create a stored procedure that captures the output of sp_who2 at regular intervals. This procedure writes the results to a permanent table, preserving a history of active sessions. Below is the script for the procedure, spTrapWho:
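A sketch of spTrapWho and its capture table; sp_who2’s column list varies slightly between versions, so verify the layout on your build before deploying (INSERT ... EXEC fails if the column counts don’t match):

```sql
CREATE TABLE dbo.tSPWho (
    CaptureTime  DATETIME NOT NULL DEFAULT GETDATE(),
    SPID         CHAR(5),
    [Status]     VARCHAR(30),
    [Login]      VARCHAR(128),
    HostName     VARCHAR(128),
    BlkBy        VARCHAR(6),
    DBName       VARCHAR(128),
    Command      VARCHAR(64),
    CPUTime      INT,
    DiskIO       INT,
    LastBatch    VARCHAR(30),
    ProgramName  VARCHAR(256),
    SPID2        CHAR(5),
    RequestID    INT
);
GO
CREATE PROCEDURE dbo.spTrapWho
AS
BEGIN
    SET NOCOUNT ON;
    -- Snapshot the current sessions; CaptureTime defaults to now
    INSERT INTO dbo.tSPWho
        (SPID, [Status], [Login], HostName, BlkBy, DBName, Command,
         CPUTime, DiskIO, LastBatch, ProgramName, SPID2, RequestID)
    EXEC sp_who2;
END;
```
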
Schedule this procedure to run every 15 minutes using SQL Server Agent. After a few days of data collection, query the tSPWho table to identify which logins are active and which applications or hosts they originate from. Combine this information with the results from the Profiler trace and the developer interview to create a comprehensive map of login usage.
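For example, a summary query along these lines (assuming the capture table keeps sp_who2’s Login, HostName, and ProgramName columns alongside a timestamp) shows who connects, from where, and when they were first and last seen:

```sql
SELECT [Login],
       HostName,
       ProgramName,
       COUNT(*)         AS samples,
       MIN(CaptureTime) AS first_seen,
       MAX(CaptureTime) AS last_seen
FROM dbo.tSPWho
GROUP BY [Login], HostName, ProgramName
ORDER BY [Login], HostName;
```

Logins that never appear in this output over several weeks are candidates for disabling, pending confirmation from the developer interviews.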
Once you have the map, revisit each login that is in use. Verify that the login’s permissions match its intended purpose. If a login appears to be over‑privileged, either tighten the permissions or replace it with a more granular account. If a login is used only in a development environment, consider disabling it on production servers.
Remember that some logins may be used by legacy applications that have hard‑coded connection strings. In those cases, you’ll need to coordinate with the application owner to update the strings or migrate to a new login. Document any changes, including the new password and the expected impact on the application, so future DBAs can review the history without needing to chase down code repositories.
Managing Application and System Dependencies
When you begin to change passwords or retire logins, you’re likely to hit dependencies that aren’t obvious from the audit tables. These include SQL Server Integration Services (SSIS) packages, Distributed Transaction Coordinator (DTC) connections, replication agents, and ODBC data source names (DSNs). Each of these can be locked to a specific login, and altering that login can cause immediate failures.
SSIS packages often store connection managers in the package file, embedding the login name and password. If you change a login’s password, you must update the package’s connection string or replace the login entirely. A practical approach is to create a dedicated “SSIS” login, grant it only the permissions needed by the packages, and then set the package connection managers to use that login. Once the package runs successfully, you can retire any legacy logins used by those packages.
DTC connections rely on the same login credentials that SSIS packages use, but they may be configured through the DTC security policy. Verify that the DTC is set to use the appropriate network access mode, and ensure the Windows account that runs the DTC service has the necessary SQL Server permissions. If you’re moving from a local SQL Server login to a domain account, update the DTC configuration accordingly.
Replication is a particularly sensitive area. When you change the password for the sa account or any login that replication agents rely on, you must update the agent security settings to match; if the distributor’s administrative connection uses that login, update the link with sp_changedistributor_password. Use the Replication Monitor or the sp_helpdistributiondb procedure to confirm that the distribution database is healthy. If the agents cannot be brought back into sync after the credential change, reinitialize the affected subscriptions rather than rebuilding the whole topology. Document each step so that the next DBA can re‑establish replication without losing data.
ODBC DSNs used by external applications or services may reference the same login as the database owner. To mitigate risk, create a separate DSN with a dedicated login, and update the application’s configuration file to point to the new DSN. Test each DSN in a staging environment before deploying to production, as DSN misconfigurations can cause silent failures that surface only under load.
Finally, consider leveraging SQL Server’s built‑in password expiration notifications. You can set up an email alert that triggers when a password is close to expiration, giving developers time to update their connection strings. Use the Database Mail feature and the sp_send_dbmail procedure to automate this notification. The alert should include the login name, the server it belongs to, and a link to the policy document that explains how to update credentials.
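A sketch of such an alert, intended to run as a scheduled Agent job step; the Database Mail profile name and recipient address are placeholders, and LOGINPROPERTY only reports days remaining for logins that have CHECK_EXPIRATION = ON:

```sql
DECLARE @body NVARCHAR(MAX);

-- Build one line per login expiring within the next 14 days
SELECT @body = COALESCE(@body + CHAR(13), '')
             + name + ': expires in '
             + CAST(LOGINPROPERTY(name, 'DaysUntilExpiration') AS VARCHAR(10))
             + ' day(s)'
FROM sys.sql_logins
WHERE CAST(LOGINPROPERTY(name, 'DaysUntilExpiration') AS INT)
      BETWEEN 0 AND 14;

IF @body IS NOT NULL
    EXEC msdb.dbo.sp_send_dbmail
         @profile_name = 'DBA Mail',
         @recipients   = 'dba-team@example.com',
         @subject      = 'SQL logins nearing password expiration',
         @body         = @body;
```
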
Managing these dependencies is not a one‑time effort. Schedule regular reviews of SSIS packages, DTC settings, replication topology, and DSNs. When you discover a new package or service, capture its login usage immediately. By maintaining an up‑to‑date inventory of these dependencies, you’ll reduce the risk of unexpected downtime when a password is changed or a login is retired.
Practical Tips for Maintaining a Secure Environment
Securing a SQL Server environment is an ongoing task that requires a disciplined approach. Beyond the initial audit and remediation, you need processes that keep the environment healthy and compliant over time. Start by institutionalizing a change‑management procedure for any login or permission modifications. Every change should be logged in a version‑controlled document, and any affected applications must be notified before the change goes live.
Automate as many of the routine checks as possible. Schedule the spAuditPasswords procedure to run nightly and email the output to the DBA team. Use a PowerShell script to compare the current list of logins against a baseline and trigger alerts if any new logins are added. These alerts give you early warning that an application or developer might be adding a new account without proper oversight.
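The baseline comparison can equally be sketched in T‑SQL against a snapshot table; dbo.tLoginBaseline is a placeholder name:

```sql
-- Capture the baseline once, after the initial audit
CREATE TABLE dbo.tLoginBaseline (name SYSNAME PRIMARY KEY);

INSERT INTO dbo.tLoginBaseline (name)
SELECT name FROM sys.server_principals
WHERE type IN ('S', 'U', 'G');
GO
-- On a schedule: any login present now but absent from the
-- baseline is new and needs investigation
SELECT name
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')
EXCEPT
SELECT name FROM dbo.tLoginBaseline;
```

Wrap the second query in an Agent job that mails any non-empty result to the DBA team.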
Adopt a naming convention for logins that indicates their purpose and environment. For example, use prefixes like DEV_, TEST_, or PROD_ followed by the application name. A consistent naming scheme helps you quickly identify whether a login is legitimate and where it should exist. When you review the audit logs, you’ll notice any login that doesn’t follow the convention and investigate its origin.
Implement the principle of least privilege in every aspect of your environment. Grant roles such as db_datareader and db_datawriter only to users who need them. Use server roles like dbcreator sparingly, and restrict it to a handful of trusted DBAs. Consider using Application Roles for higher‑level security; an application role can be enabled by the application and automatically assign a set of permissions that the user does not normally have.
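A sketch of an application role; the role name, password, and the dbo.Orders table are hypothetical:

```sql
CREATE APPLICATION ROLE SalesApp
    WITH PASSWORD = 'Appr0le!Secret#1';

-- Permissions attach to the role, not to the individual users
GRANT SELECT, INSERT ON dbo.Orders TO SalesApp;
GO
-- The application activates the role after connecting; from that
-- point the session carries the role's permissions instead of the
-- user's own
EXEC sp_setapprole @rolename = 'SalesApp',
                   @password = 'Appr0le!Secret#1';
```

Because the password is supplied by the application at runtime, keep it out of source control and rotate it on the same schedule as your other privileged credentials.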
Keep your SQL Server instances up to date with the latest cumulative updates and service packs. Microsoft’s security releases often address critical vulnerabilities that could be exploited if left unpatched. Use Microsoft Update, Windows Server Update Services (WSUS), or your organization’s patch‑management tooling to schedule and automate the update process. Document each update in a release notes file and include it in the environment’s change log.
Regularly review the SQL Server Agent jobs and scheduled tasks. Verify that each job has a meaningful name, clear documentation, and the correct owner. Jobs that run with elevated privileges should be scrutinized for unnecessary permissions. If a job no longer serves a purpose, remove it to reduce attack surface.
For compliance and audit purposes, maintain a copy of every database backup and log file. Store backups in a secure, off‑site location, and verify the integrity of the backups by performing periodic restores to a test server. This practice ensures that you can recover from data loss and demonstrates adherence to recovery point objective (RPO) and recovery time objective (RTO) goals.
When you’re ready to share your knowledge, consider creating a knowledge base article or a short video walkthrough. This site, TransactSQL.com, hosts more than 1,200 examples that can help you master Transact‑SQL. By publishing your own documentation, you reinforce your learning and help others avoid the pitfalls you’ve encountered.
Security is a continuous journey. By establishing a culture of documentation, automation, and rigorous access control, you’ll transform a chaotic environment into a stable, compliant foundation that can support your organization’s growth and innovation.