Meet Greg Robidoux: A Veteran of SQL Server Strategy
When you think of a database administrator who has seen the evolution of SQL Server from the early 2000s to today, Greg Robidoux naturally comes to mind. With nearly fifteen years in the IT arena, Greg began his career in networking before moving into database management - a transition that gave him a unique perspective on how data systems fit into broader enterprise infrastructure.
His early days saw him juggling systems that ranged from a single Sybase instance to Oracle clusters for small firms. Greg ultimately found his niche in Microsoft SQL Server, diving deep into the 6.5, 7.0, and 2000 editions. Those foundational years were marked by hands‑on upgrades, meticulous change‑management scripts, and the first forays into high‑availability setups.
As companies grew, so did the complexity of their database environments. Greg’s portfolio expanded to include enterprises with over a hundred SQL Server instances spread across multiple data centers. In those settings, he tackled challenges from SAN integrations to centralizing management through tools like SQL Server Management Studio and Enterprise Manager.
Beyond the technical, Greg understood that successful database projects hinge on collaboration. He built strong relationships with development teams to align database design with application needs, and he worked closely with infrastructure engineers to ensure that storage, networking, and power budgets matched the demands of mission‑critical workloads.
His role as Vice Chair for the PASS DBA Special Interest Group further illustrates his commitment to the community. Through that position, Greg has mentored junior DBAs, organized workshops, and kept the conversation around best practices fresh and relevant.
When the pandemic forced many organizations to pivot to remote work, Greg’s expertise became even more vital. He helped clients assess their cloud migration plans, ensuring that data protection and compliance requirements were baked into the architecture from day one.
Greg’s approach is pragmatic: start with the business objectives, then layer on the technical safeguards. He advocates for continuous learning - keeping up with new SQL Server features, like Always On Availability Groups, and exploring third‑party tools that streamline backup and recovery processes.
His consulting work, which we’ll explore in depth in the next section, reflects a philosophy that goes beyond surface‑level fixes. For Greg, the goal is to build environments that endure change - whether that change is a hardware failure, a sudden spike in user traffic, or a new regulatory standard.
In short, Greg Robidoux isn’t just a seasoned DBA; he’s a strategic partner who can translate complex technical requirements into clear, actionable plans that protect an organization’s data assets.
Throughout his career, Greg has maintained a single guiding principle: data is an asset, and protecting that asset requires intentional planning, rigorous testing, and ongoing documentation.
Edgewood Solutions: Elevating SQL Server Platforms
Founded in January 2002, Edgewood Solutions emerged with a clear mission: to raise the bar for Microsoft SQL Server implementations by addressing the often‑overlooked components that underpin a reliable database environment.
Edgewood’s founders recognized that while many organizations invest heavily in hardware redundancy and network infrastructure, they frequently neglect the softer aspects - change management, security policies, and performance tuning. These gaps can turn a solid system into a fragile one when unexpected events occur.
The company’s service catalog reflects that insight. Rather than offering a one‑size‑fits‑all solution, Edgewood focuses on developing key building blocks that most DBAs know they need but rarely get around to implementing. This targeted approach ensures that clients receive tailored guidance on:
- Change Management – establishing processes to document, review, and deploy database changes.
- Security Policies – enforcing role‑based access control, encryption at rest, and audit trails.
- Disaster Recovery Planning – creating actionable plans that cover both site‑wide and localized incidents.
- Project Management – using frameworks that align database initiatives with business objectives.
- SQL Server Upgrades – minimizing downtime and ensuring feature parity.
- Maintenance Planning – scheduling backups, index rebuilds, and health checks (a short sketch of this item follows the list).
- Performance Analysis and Tuning – leveraging monitoring tools to detect and resolve bottlenecks.
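As a concrete illustration of the maintenance‑planning building block, the sketch below shows the kind of steps a weekly SQL Server Agent job might run. It is a minimal example, not an Edgewood deliverable, and the database and table names (SalesDB, dbo.Orders) are placeholders.

    -- Minimal weekly-maintenance sketch; SalesDB and dbo.Orders are placeholders.
    USE SalesDB;

    ALTER INDEX ALL ON dbo.Orders REBUILD;     -- rebuild fragmented indexes
    UPDATE STATISTICS dbo.Orders;              -- refresh optimizer statistics on the table
    DBCC CHECKDB (SalesDB) WITH NO_INFOMSGS;   -- integrity (health) check

In practice each statement would usually be its own job step with its own alerting, so a failed integrity check is noticed rather than silently swallowed by the rest of the schedule.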
Edgewood’s expertise is amplified by strategic partnerships with specialized vendors. By aligning with solutions like SQL LiteSpeed for rapid, disk‑based backups; Lumigent Log Explorer for detailed transaction log analysis; and Precise Software Solutions for data compression and encryption, Edgewood can recommend tools that fit each client’s specific environment and budget.
Clients appreciate that Edgewood not only recommends products but also assists with their deployment, configuration, and integration into existing workflows. This end‑to‑end support ensures that the chosen tools deliver tangible benefits rather than becoming unused add‑ons.
Edgewood’s reputation is built on a mix of technical depth and practical implementation experience. Their staff includes certified DBAs, performance specialists, and security analysts who collaborate to address every layer of the database stack.
When organizations bring Edgewood on board, they gain more than a consultant; they gain a partner who advocates for a holistic approach to SQL Server management. This partnership often translates into reduced incidents, faster recovery times, and a clearer path to achieving compliance mandates.
Edgewood’s success story is a testament to the power of focusing on the overlooked areas that make a database environment resilient. By combining proven processes with the right tools, Edgewood helps businesses protect their most valuable asset - data.
Why Disaster Recovery Matters Beyond Catastrophic Site Loss
Most conversations around disaster recovery start with the image of a data center flooded or struck by a hurricane. That scenario is certainly dramatic, but it represents only a fraction of the real threats DBAs face on a daily basis.
Unplanned downtime can arise from a power outage, a misapplied script, or a single server failure that ripples across an application tier. When a system goes down for even a few minutes, the impact on revenue, customer trust, and operational momentum can be immediate and significant.
For organizations that operate around the clock, the window for planned maintenance shrinks. The rise of SaaS, global marketplaces, and real‑time analytics has made downtime an even higher risk. In such environments, even a brief interruption can trigger cascading failures, data corruption, or compliance violations.
Because of this, a robust disaster recovery plan must be framed around business availability requirements. A critical first step is defining an acceptable outage window - often expressed as a Recovery Time Objective (RTO) and a Recovery Point Objective (RPO). These metrics drive the design of every subsequent recovery strategy.
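To make that concrete, the hedged sketch below flags databases whose last log backup is older than a hypothetical 15‑minute RPO. The threshold, the exclusion of system databases, and the reliance on msdb backup history are all assumptions, not a prescription.

    -- Sketch: databases in FULL recovery that have exceeded an assumed 15-minute RPO.
    DECLARE @RPOMinutes int = 15;

    SELECT d.name,
           MAX(b.backup_finish_date) AS last_log_backup
    FROM sys.databases AS d
    LEFT JOIN msdb.dbo.backupset AS b
           ON b.database_name = d.name AND b.type = 'L'   -- 'L' = log backup
    WHERE d.recovery_model_desc = 'FULL'
      AND d.database_id > 4                               -- skip system databases
    GROUP BY d.name
    HAVING MAX(b.backup_finish_date) IS NULL
        OR MAX(b.backup_finish_date) < DATEADD(MINUTE, -@RPOMinutes, GETDATE());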
When a plan is built to meet those objectives, it covers more than just large‑scale events. It includes procedures for handling localized incidents such as a corrupted table, a failed index rebuild, or a network partition that isolates a subset of servers. Each of these incidents can be addressed quickly if the right backups and failover mechanisms are in place.
Another key aspect often missed in disaster recovery discussions is the human factor. Even the most technically sound plan will fail if the team executing it does not know what to do. That’s why many experts emphasize that a recovery plan is as much about people and processes as it is about technology.
By shifting the focus from “what if the whole site goes down” to “how do we keep the system running when anything goes wrong,” organizations can align their disaster recovery strategy with everyday operational realities. The result is a set of controls that reduce both the likelihood of an incident and the duration of any outage that does occur.
In practice, this means investing in regular drills, creating clear escalation paths, and maintaining up‑to‑date documentation. It also involves staying current with new features in SQL Server - like Always On Availability Groups or Azure Site Recovery - that can simplify recovery and improve resilience.
Ultimately, a well‑thought‑out disaster recovery plan provides peace of mind. It signals to stakeholders that the business is prepared for the unexpected and that data integrity and availability remain top priorities.
Building a Disaster Recovery Blueprint: From Business Needs to Technical Tactics
Creating a disaster recovery blueprint starts with a simple premise: align the recovery strategy with the organization’s critical business functions. That alignment begins by asking two foundational questions - how much downtime can the business tolerate, and what budget is available to keep downtime within that limit?
Once those limits are established, the next phase involves cataloging potential risks. These range from hardware failures and power interruptions to human errors and malicious attacks. By listing each risk, DBAs can prioritize mitigation efforts and design specific recovery procedures for each scenario.
Data collection forms the backbone of any recovery plan. This includes documenting the current environment - hardware specifications, software versions, network topology, and user access controls. A standardized inventory format not only streamlines future updates but also serves as a quick reference during an incident.
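A minimal starting point for that inventory, assuming it is gathered with plain T‑SQL rather than a dedicated tool, is a per‑instance snapshot like the one below; the chosen columns are illustrative, not a complete standard.

    -- Illustrative per-instance inventory snapshot.
    SELECT SERVERPROPERTY('MachineName')    AS machine_name,
           SERVERPROPERTY('InstanceName')   AS instance_name,
           SERVERPROPERTY('Edition')        AS edition,
           SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')   AS service_pack_level,
           SERVERPROPERTY('Collation')      AS server_collation;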
Key contacts and escalation paths must be clearly defined. A concise list of stakeholders, from system owners to executive sponsors, ensures that the right people are notified immediately when an outage occurs. Escalation levels should outline who has authority to make decisions, how quickly they need to respond, and what actions are required at each level.
Having a media kit that contains all the necessary software versions, service packs, and licensing information simplifies recovery. When an instance needs to be restored or rebuilt, the kit allows DBAs to pull the exact binaries without hunting through version control or vendor portals.
Standardizing server configurations across the fleet reduces complexity during recovery. Whether it’s a consistent OS patch level or uniform SQL Server settings, a baseline configuration ensures that recovery procedures are predictable and repeatable.
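One way to enforce such a baseline, sketched here under the assumption that expected values live in a hypothetical dbo.ConfigBaseline table, is to compare sys.configurations against it and flag any drift.

    -- Sketch: report instance settings that differ from an assumed baseline table.
    -- dbo.ConfigBaseline (setting_name, expected_value) is hypothetical.
    SELECT c.name,
           c.value_in_use,
           b.expected_value
    FROM sys.configurations AS c
    JOIN dbo.ConfigBaseline AS b
      ON b.setting_name = c.name
    WHERE CAST(c.value_in_use AS int) <> b.expected_value;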
Backups are the cornerstone of data protection. Storing them on disk rather than tape speeds up both backup and restore operations. Tools like SQL LiteSpeed can dramatically reduce backup windows while preserving full fidelity through compression and encryption.
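For shops that rely on native backups rather than a third‑party tool, a disk backup with compression and checksums might look like the sketch below; the path is a placeholder, and WITH COMPRESSION assumes SQL Server 2008 or later.

    -- Native compressed disk backup; file path and database name are placeholders.
    BACKUP DATABASE SalesDB
        TO DISK = N'E:\Backups\SalesDB_20240101.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;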
Spare hardware - whether a hot standby server or a redundant rack - serves as a safety net when primary equipment fails. The availability of spare units allows DBAs to swap systems quickly without waiting for procurement or shipment.
Communication is an often‑overlooked element. An internal communication plan - defining how alerts are sent, who receives them, and what information is shared - keeps everyone aligned throughout an incident. This is especially critical when multiple teams are involved in the recovery effort.
Testing validates the entire plan. Regular drills that simulate various failure scenarios - such as a complete database loss or a network partition - highlight gaps and provide valuable hands‑on experience for the team. Successful tests build confidence that the plan will perform when it matters most.
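Not every part of a drill needs people in a room; some checks can run automatically between exercises. One small example, with a placeholder path, is confirming that the latest backup file is readable before the plan ever depends on it.

    -- Verify the backup file is readable and its checksums are intact.
    RESTORE VERIFYONLY
        FROM DISK = N'E:\Backups\SalesDB_20240101.bak'
        WITH CHECKSUM;

A verify pass is not a substitute for an actual restore to a test server, which remains the only way to prove the full recovery path end to end.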
Once the plan is live, it is not a set‑and‑forget document. Continuous improvement is essential. As new applications are deployed, new hardware is added, or policies change, the disaster recovery blueprint must be revisited and updated. This dynamic approach keeps the plan relevant and effective.
Third‑party tools can fill gaps that native SQL Server capabilities do not cover. BindView for SQL Server and NetIQ ConfigurationManager help automate the collection of configuration data, while Lumigent Log Explorer provides granular transaction log analysis to accelerate point‑in‑time restores.
In practice, building a disaster recovery blueprint is a disciplined exercise that balances business priorities, technical feasibility, and resource constraints. The result is a living document that protects data, preserves uptime, and gives the organization confidence that it can weather any storm.
Documentation, Testing, and the Human Factor in Disaster Recovery
Documentation is more than a compliance checkbox; it is the single most reliable reference point when a crisis unfolds. A comprehensive disaster recovery guide should contain every piece of information that DBAs and operations staff need to act quickly and correctly.
At its core, the guide must list a contact matrix that assigns responsibility for each recovery step. Names, roles, phone numbers, and email addresses should be current and double‑checked on a quarterly basis. In an incident, a single mis‑dial can delay the entire recovery.
Versioning is equally critical. Each backup, configuration file, and script should have a unique identifier and a log of changes. When restoring a database, knowing the exact backup version and the context in which it was taken (e.g., after a patch or a deployment) prevents inadvertent rollback of recent improvements.
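Much of that version history already exists in msdb; a hedged sketch for pulling it for one database (the name is a placeholder) is shown below.

    -- Recent backup history for one database, with the details needed to pick
    -- the right version during a restore.
    SELECT TOP (20)
           bs.backup_finish_date,
           bs.type,                        -- D = full, I = differential, L = log
           bs.name AS backup_set_name,
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    JOIN msdb.dbo.backupmediafamily AS bmf
      ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = N'SalesDB'
    ORDER BY bs.backup_finish_date DESC;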
Hardware details - serial numbers, rack locations, and power configurations - should be part of the documentation. This data streamlines asset tracking and ensures that replacement parts can be sourced quickly during a failure.
Server and application priority lists define the order in which services are restored. If an organization relies on a payment gateway, that system takes precedence over a legacy reporting tool. A clear hierarchy prevents confusion when multiple services are impacted simultaneously.
Scenario templates are the heart of a testable plan. They outline step‑by‑step instructions for common incidents: a corrupted table, a lost transaction log, or a full server outage. Each template should include prerequisites, expected outcomes, and success criteria.
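As an example of the level of detail a template should reach, here is a skeleton for a "bad transaction" scenario: restore the last full backup without recovery, then roll the log forward to just before the error. File paths and the stop time are placeholders.

    -- Skeleton for a point-in-time recovery after a bad transaction.
    RESTORE DATABASE SalesDB
        FROM DISK = N'E:\Backups\SalesDB_full.bak'
        WITH NORECOVERY, REPLACE;

    RESTORE LOG SalesDB
        FROM DISK = N'E:\Backups\SalesDB_log.trn'
        WITH STOPAT = '2024-01-01 14:55:00', RECOVERY;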
Testing is the practice that turns theoretical plans into proven procedures. By conducting tabletop exercises or full‑blown drills, DBAs expose assumptions and discover hidden dependencies. The key to a productive test is realism - simulate the exact conditions, including network latency, resource contention, and human error.
Drills should involve all stakeholders: DBAs, developers, network engineers, and management. Each participant gains a better understanding of their role and how to communicate during an actual outage.
After each test, a post‑mortem should capture lessons learned, updated procedures, and any remaining questions. This feedback loop ensures that the plan evolves to meet the organization’s changing needs.
Myths often undermine disaster recovery efforts. One common misconception is that only full‑site failures justify a plan. In reality, even isolated incidents - such as a single corrupted table - can disrupt business processes. Another myth is that a disaster recovery plan is a static document; instead, it must be treated as a living artifact that is regularly reviewed.
Common mistakes include neglecting to keep documentation current and failing to test the plan. A document that reflects past environments becomes obsolete, and a plan that is never exercised is a plan that never works.
Learning resources abound, from vendor whitepapers to community blogs. For instance, the PASS community hosts webinars on backup strategies, and Microsoft’s documentation offers guidance on configuring Always On Availability Groups. These resources can fill gaps in knowledge and keep teams up to date with best practices.
Ultimately, the human factor is the linchpin of disaster recovery success. Even the most advanced technology cannot compensate for a team that is unprepared, untrained, or unsure of their responsibilities. By investing in clear documentation, regular testing, and open communication, organizations can transform disaster recovery from a theoretical exercise into a proven capability that protects data and sustains operations.