Secure ADODB

Unless you’re hosted in Access, your VBA project doesn’t have access to a database engine. If you’re in Excel, it’s easy to treat the host workbook as a database and each worksheet as a table. While we can build an application that uses Excel worksheets to store data, we probably shouldn’t do that. The reasons are many, but primarily (pun …yeah, intended), we want to be able to establish bullet-proof referential integrity between records/tables; while Excel is great for many things, it’s useless for that: it’s the job of a relational database management system (RDBMS), not that of any worksheet software, no matter how powerful. Power Query is very much worth looking into, but if you’re building a small CRUD (Create/Read/Update/Delete) application in VBA, you’ll want VBA code responsible for the data access – enter ADODB, …and every pitfall that comes with it.

In this article we will explore a heavily object-oriented solution to querying a database securely with the ADODB library.


Securely?

Querying a database with ADODB is easy: just set up a connection, open it, then execute whatever SQL statement you need through the Connection, and you get the results in a Recordset object:

Dim conn As ADODB.Connection
Set conn = New ADODB.Connection
conn.Open "ConnectionString"

Dim rs As ADODB.Recordset
Set rs = conn.Execute("SELECT Field1, Field2 FROM Table1")

'...

rs.Close
conn.Close

That is great for one-time, ad-hoc queries; things quickly get messy when you start needing multiple queries, or when your SQL statement needs to be invoked repeatedly with different values:

Dim conn As ADODB.Connection
Set conn = New ADODB.Connection
conn.Open "ConnectionString"

Dim i As Long
For i = 1 To 10
    Dim rs As ADODB.Recordset
    Set rs = conn.Execute("SELECT Field1, Field2 FROM Table1 WHERE Field3 = " & i)
    '...
    rs.Close
Next
conn.Close

This right here – WHERE Field3 = " & i – is making the database engine work harder than it needs to… and it’s costing server-side performance, because as far as the engine knows, it’s getting a different query every time – and thus it computes the same execution plan over and over, when it could just be reusing it. Databases are smart. Like, wicked smart… but we still need to ask for the right thing!

Compare to something like this:

Const sql As String = "SELECT Field1, Field2 FROM Table1 WHERE Field3 = ?"
Dim conn As ADODB.Connection
Set conn = New ADODB.Connection
conn.Open "ConnectionString"

Dim i As Long
For i = 1 To 10
    Dim cmd As ADODB.Command
    Set cmd = New ADODB.Command
    Set cmd.ActiveConnection = conn 'without this, Execute has no connection to run against
    cmd.CommandType = adCmdText
    cmd.CommandText = sql
    cmd.Parameters.Append cmd.CreateParameter(Type:=adInteger, Value:=i)

    Dim rs As ADODB.Recordset
    Set rs = cmd.Execute
    '...
    rs.Close
Next
conn.Close

Oh my, so much more code, so little gain – right?

Using ADODB.Command when queries involve a WHERE (and/or VALUES) clause and user-provided (directly or not) values is not only more efficient (the cached execution plan is reused because the command string is identical every time), it’s also more secure. Concatenating user inputs into SQL command strings is a common rookie mistake, and the practice is way more widespread than it should be (regardless of the language, paradigm, or platform); it makes your code vulnerable to SQL injection attacks – something that may or may not be in your threat model, but that inevitably turns into… easily avoidable bugs: think of what might happen if a user entered O'Connor in that LastName field. If you’re thinking “oh that’s easy! I’ll just double-up the single quotes, and it’s fixed!”, then you’re playing a needlessly exhausting game of cat-and-mouse with the next thing that will break your clever escaping: the mouse wins.
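To make that concrete, here’s a contrived sketch – Customers, LastName, and LastNameBox are made-up names, and conn is assumed to be an open ADODB.Connection – showing what the concatenated string degenerates into, next to its parameterized equivalent:

'if LastNameBox.Text is O'Connor, concatenation produces this invalid statement:
'SELECT * FROM Customers WHERE LastName = 'O'Connor'
Dim unsafeSql As String
unsafeSql = "SELECT * FROM Customers WHERE LastName = '" & LastNameBox.Text & "'"

'parameterized: the user input never becomes part of the command text
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
Set cmd.ActiveConnection = conn
cmd.CommandType = adCmdText
cmd.CommandText = "SELECT * FROM Customers WHERE LastName = ?"
cmd.Parameters.Append cmd.CreateParameter(Type:=adVarWChar, Size:=255, Value:=LastNameBox.Text)

Dim rs As ADODB.Recordset
Set rs = cmd.Execute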

Abstract thoughts

Much simpler to just use an ADODB.Command every time, and when you need it parameterized, to Append any number of ADODB.Parameter objects to its Parameters collection. Except it makes for a lot of code to write, every time.

What do we do when we see repetitive patterns in code? If you’re thinking “we put it in a function!” then you’re thinking abstraction and that’s exactly the right train of thought.
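A first pass at that abstraction might be a plain function that hides the boilerplate – a minimal sketch (the name, signature, and the naive inline type mapping are mine, not part of the API this article builds up to):

'wraps the repetitive command/parameter boilerplate behind a single call
Public Function ExecuteParameterized(ByVal conn As ADODB.Connection, _
        ByVal sql As String, ParamArray args() As Variant) As ADODB.Recordset
    Dim cmd As ADODB.Command
    Set cmd = New ADODB.Command
    Set cmd.ActiveConnection = conn
    cmd.CommandType = adCmdText
    cmd.CommandText = sql
    
    Dim i As Long
    For i = LBound(args) To UBound(args)
        'naive mapping: real code needs a proper type map, as we'll see
        Select Case VarType(args(i))
            Case vbInteger, vbLong
                cmd.Parameters.Append cmd.CreateParameter(Type:=adInteger, Value:=args(i))
            Case vbString
                cmd.Parameters.Append cmd.CreateParameter(Type:=adVarWChar, Size:=Len(args(i)) + 1, Value:=args(i))
            Case Else
                Err.Raise 5, "ExecuteParameterized", "Unsupported parameter type."
        End Select
    Next
    
    Set ExecuteParameterized = cmd.Execute
End Function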

We’re just going to take this abstraction… and make it an object. Then think of what objects it needs in order to do its job, and abstract these objects behind interfaces too, and take these abstractions in as constructor parameters of our Create “static” factory method. Rinse & repeat until all dependencies are property-injected and all responsibilities are nicely encapsulated into their own classes. It was fun!

I wrote the original version of this functionality a little while ago – you can find it on Code Review, and see how different/similar it is to this simplified/improved version in our Examples repository on GitHub.

The original was just an ADODB wrapper class though: it couldn’t really be unit-tested, and it was annoying to maintain because it felt very repetitive. This version separates the type mappings from the parameter-providing logic, so that configuring these mappings is done through an object that’s solely responsible for them; it also separates the command from the connection, and abstracts away that connection enough to enable unit testing and cover quite a large part of the API – but most importantly, this version exposes adequate abstractions for the calling code to use and stub in its own unit tests.

VBA code written with this API (and the principles it demonstrates) can easily be fully testable, without ever actually hitting any database.

I can do this in the immediate pane:

?UnitOfWork.FromConnectionString("connection string").Command.GetSingleValue("SELECT Field1 FROM Table1 WHERE Id=?", 1)

I mean, it’s a contrived example, but with a valid connection string, query, and arguments, that’s all we need to get an actual parameterized ADODB command sending that 1 as an actual ADODB parameter, …and the following debug output:

Begin connect...
Connect completed. Status: 1
Begin transaction completed. 
Begin execute...
Execute completed, -1 record(s) affected.
{whatever value was in Field1}
Rollback transaction completed.
Disconnect completed. Status: 1

I made DbConnection listen in on whatever events the ADODB connection is firing, pending the implementation of an adapter to expose some IDbConnectionEvents members – the idea is to end up with client code that can inject its own callbacks and do things like log such messages. In the meantime, Debug.Print statements are producing this debug output, but that’s an implementation detail: the class doesn’t publicly expose any of these events. It couldn’t, either: the rest of the code needs to work with the IDbConnection interface, and interfaces unfortunately can’t expose events in VBA.
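Inside DbConnection, that looks more or less like this – a simplified excerpt (the real class handles more of the connection’s events, including the transaction-related ones):

'DbConnection (excerpt): the wrapped connection is declared WithEvents...
Private WithEvents adoConnection As ADODB.Connection

'...so the class can handle its events and trace them, for now:
Private Sub adoConnection_WillConnect(ConnectionString As String, UserID As String, _
        Password As String, Options As Long, adStatus As ADODB.EventStatusEnum, _
        ByVal pConnection As ADODB.Connection)
    Debug.Print "Begin connect..."
End Sub

Private Sub adoConnection_ConnectComplete(ByVal pError As ADODB.Error, _
        adStatus As ADODB.EventStatusEnum, ByVal pConnection As ADODB.Connection)
    Debug.Print "Connect completed. Status: " & adStatus
End Sub

Private Sub adoConnection_ExecuteComplete(ByVal RecordsAffected As Long, _
        ByVal pError As ADODB.Error, adStatus As ADODB.EventStatusEnum, _
        ByVal pCommand As ADODB.Command, ByVal pRecordset As ADODB.Recordset, _
        ByVal pConnection As ADODB.Connection)
    Debug.Print "Execute completed, " & RecordsAffected & " record(s) affected."
End Sub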


SecureADODB

Some might call it layered spaghetti. Others call it lasagna. I call it well-abstracted code that reads and maintains like a charm and provably works as intended. There is nothing, absolutely nothing wrong with having many class modules in a VBA project: the only problem is… well, the VBE itself:

  • The Project Explorer makes OOP rather painful – in fact, it makes any kind of modularization painful.
  • The Code Explorer makes the VBE more OOP-friendly: with it, you can have folders regrouping modules by functionality rather than just by module type.

Nice, rich APIs involve many related objects, interfaces, methods – members that make up the object model the API’s client code will be working with. As long as we can keep all these classes organized, there’s no problem having many of them.

Before we look at the implementation, let’s review the interfaces and the overall structure.

Only two interfaces aren’t being stubbed for unit tests. The first is IUnitOfWork: as the top-level object, nothing in the object model consumes it. It is needed though, because client code can inject it as a dependency of some FooRepository class, and then tests can provide that class with a StubUnitOfWork that implements IUnitOfWork.

The other “façade” interface is ITypeMap. This one isn’t really needed (neither is the predeclared instance of AdoTypeMappings, or its Default factory method), but something felt wrong with the client code without it. While the class is essentially just a dictionary – literally, a map – there’s something rather elegant about depending on an ITypeMap rather than on some Scripting.Dictionary.
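To give an idea of the shape of it – a sketch, since the exact members in the repository may differ: the interface maps a given value to the ADODB data type its parameter should use, and AdoTypeMappings implements it with what amounts to a Select Case over VarType.

'--- ITypeMap (interface, sketch) ---
Public Function Mapping(ByVal value As Variant) As ADODB.DataTypeEnum
End Function

'--- AdoTypeMappings (implementation excerpt, sketch) ---
Private Function ITypeMap_Mapping(ByVal value As Variant) As ADODB.DataTypeEnum
    Select Case VarType(value)
        Case vbInteger, vbLong: ITypeMap_Mapping = adInteger
        Case vbDouble: ITypeMap_Mapping = adDouble
        Case vbCurrency: ITypeMap_Mapping = adCurrency
        Case vbDate: ITypeMap_Mapping = adDate
        Case vbString: ITypeMap_Mapping = adVarWChar
        Case vbBoolean: ITypeMap_Mapping = adBoolean
        Case Else: Err.Raise 5, "AdoTypeMappings", "No mapping for this type."
    End Select
End Function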

Two of the interfaces are abstract factory interfaces, each with a “real” and a “stub” implementation for tests: these are very simple classes whose entire purpose is to create an object of a particular type.

If we consider IParameterProvider an implementation detail of IDbCommandBase, that leaves us with only the core stuff: IDbCommandBase, IDbCommand, and IDbConnection – everything else just revolves around these.

DbCommandBase

The old SqlCommand code had two sets of commands: “Execute” methods you could pass a Connection to, and “QuickExecute” methods that created a connection on-the-spot. I decided to split the two behaviors into two distinct implementations of the same interface, and that’s how I ended up with DefaultDbCommand and AutoDbCommand. As I was cleaning up the two new classes, I couldn’t help but notice that they needed a number of common bits of functionality… as would any other implementation of IDbCommand.

In a language that supports inheritance, I would probably make the two classes inherit a third abstract “base” class where I’d implement the IDbCommand interface. In VBA, we can’t derive a class from another, or inherit members from another class: inheritance is flat-out unavailable. There’s an alternative though, and it’s arguably even better than inheritance: composition. We can put the common functionality in a third class, and then have the two implementations take an instance of that “base” class as we would any other dependency – effectively achieving what we wanted out of inheritance, but through composition.
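In practice, that means each implementation holds an IDbCommandBase reference and delegates to it – a simplified sketch of the idea (member names abridged):

'DefaultDbCommand (excerpt): "inherits" shared logic by composing the base
Implements IDbCommand
Private base As IDbCommandBase      'the composed "base class" dependency
Private db As IDbConnection         'the injected connection

Private Function IDbCommand_GetSingleValue(ByVal sql As String, ParamArray args() As Variant) As Variant
    Dim values() As Variant
    values = args 'copy the ParamArray so it can be passed along ByRef
    IDbCommand_GetSingleValue = base.GetSingleValue(db, sql, values)
End Function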

Code is said to be “decoupled” when none of its concrete components are inter-dependent. Decoupled components can easily be swapped for other implementations, like …test stubs.

What’s wrong with inheritance?

Don’t get me wrong, inheritance is very cool: with an abstract class you can have templated methods, where a method in the base class (typically a method that implements some interface member) invokes an abstract or virtual method (typically with protected scope) that the inherited class must override and provide an implementation for. Rubberduck uses this pattern in quite a few places (inspections, notably). Without inheritance, it’s just not something that’s possible.

Inheritance describes an “is a” relationship, while composition describes a “has a” relationship. This is important, because when the only consideration weighing in favor of inheritance is the need for two classes to share some functionality, that is exactly when inheritance should not be used.


Decoupling FTW

The “base” class came out of the need for a place where IDbCommand implementations could access shared functionality. I wanted to return disconnected recordsets, and retrieving the value of the first field of the first record of a recordset isn’t something that’s glaringly implementation-specific. The other piece of functionality I needed was a function that creates the ADODB.Command object and adds the parameters.

Because I wanted this class to create the ADODB.Command, I needed it to be able to turn a Variant into an ADODB.Parameter through some mapping, and since I didn’t want my class to be necessarily coupled with that mapping, or anything remotely related to configuring ADODB parameters… I’m property-injecting an IParameterProvider dependency:

Public Function Create(ByVal provider As IParameterProvider) As IDbCommandBase
    Errors.GuardNonDefaultInstance Me, DbCommandBase
    Errors.GuardNullReference provider
    
    Dim result As DbCommandBase
    Set result = New DbCommandBase
    Set result.ParameterProvider = provider
    
    Set Create = result

End Function

Validating the command string / arguments

Since the commands are given an SQL command string to execute, and a ParamArray of arguments that should have the same number of items as there are ? ordinal parameters in the SQL command string, we have an opportunity to catch a missing or extraneous argument before we even send the command to the database server. And because this validation logic is the same regardless of which IDbCommand implementation we’re looking at, DbCommandBase is the best place to put it.

This implementation is probably too naive for a number of edge cases, but sufficient for most: we’re simply counting the number of ? characters in the sql string, and comparing that with the number of elements in the args array. We need to handle errors here, because if the args array is empty, evaluating UBound(args) and/or LBound(args) will throw a “subscript out of range” run-time error 9.

Public Function ValidateOrdinalArguments(ByVal sql As String, ByRef args() As Variant) As Boolean
    Dim result As Boolean
    
    Dim expected As Long
    expected = Len(sql) - Len(Replace(sql, "?", vbNullString))
    
    Dim actual As Long
    On Error GoTo CleanFail 'if there are no args, LBound/UBound are both out of bounds
    actual = UBound(args) + (1 - LBound(args))
    
CleanExit:
    result = (expected = actual)
    ValidateOrdinalArguments = result
    Exit Function
CleanFail:
    actual = 0
    Resume CleanExit
End Function

Getting a disconnected Recordset

If we create a database connection, issue a command against it, receive a recordset from ADODB.Command.Execute, and then close the connection and return that recordset, the calling code can’t use the data anymore: a connected recordset only works for as long as the calling code owns an open connection. So we need a way to issue a disconnected recordset, while still using an ADODB.Command. The way to do this is to pass the command as the Source argument to Recordset.Open, and to use a static, client-side cursor:

Private Function GetDisconnectedRecordset(ByVal cmd As ADODB.Command) As ADODB.Recordset
    Errors.GuardNullReference cmd
    Errors.GuardNullReference cmd.ActiveConnection
    
    Dim result As ADODB.Recordset
    Set result = New ADODB.Recordset
    
    result.CursorLocation = adUseClient
    result.Open Source:=cmd, CursorType:=adOpenStatic
    
    Set result.ActiveConnection = Nothing
    Set GetDisconnectedRecordset = result
End Function

Getting a single value result

With functions to validate the parameters, create the commands, and get a disconnected recordset, we have everything we need for IDbCommand implementations to do their job. But if we leave it like this, we’ll end up with every implementation copying the logic of IDbCommand.GetSingleValue: best to have that logic in DbCommandBase and avoid as much repetition as possible.

Private Function GetSingleValue(ByVal db As IDbConnection, ByVal sql As String, ByRef args() As Variant) As Variant
    Errors.GuardEmptyString sql
    
    Dim cmd As ADODB.Command
    Set cmd = CreateCommand(db, adCmdText, sql, args)
    
    Dim results As ADODB.Recordset
    Set results = GetDisconnectedRecordset(cmd)
    
    GetSingleValue = results.Fields.Item(0).value
End Function

Creating the command

A few things can go wrong when creating the ADODB.Command object: we need an ADODB.Connection that’s open, and the parameters must be valid. Since we’re not executing the command just yet, we don’t have to worry about everything that could go wrong actually executing the command string and processing the parameters on the server. So the strategy here is to guard against invalid inputs as much as possible, then to handle errors as we add the parameters, and to return the Command object with whatever parameters were successfully added. We don’t need to try salvaging the rest of the parameters if one blows up: a failing parameter will fail command execution anyway. We could throw an error and have the caller not even try to run the command, but here I decided that the server-side errors would be more useful than any custom “invalid parameter” error.

Note that the ADODB.Command object is actually created by the method-injected IDbConnection dependency. This creates a seam between the class and ADODB, despite the inherent coupling with the ADODB.Command type: it makes the command’s ActiveConnection an implementation detail of IDbConnection.CreateCommand, and that’s all I needed to make this method work with a stub connection that isn’t actually connecting to anything:

Private Function CreateCommand(ByVal db As IDbConnection, ByVal commandType As ADODB.CommandTypeEnum, ByVal sql As String, ByRef args() As Variant) As ADODB.Command
    Errors.GuardNullReference db
    Errors.GuardEmptyString sql
    Errors.GuardExpression db.State <> adStateOpen, message:="Connection is not open."
    Errors.GuardExpression Not ValidateOrdinalArguments(sql, args), message:="Arguments supplied are inconsistent with the provided command string parameters."
    
    Dim cmd As ADODB.Command
    Set cmd = db.CreateCommand(commandType, sql)
    
    On Error GoTo CleanFail
    Dim arg As ADODB.Parameter
    For Each arg In this.ParameterProvider.FromValues(args)
        cmd.Parameters.Append arg
    Next
    
CleanExit:
    Set CreateCommand = cmd
    Exit Function
CleanFail:
    Resume CleanExit
End Function

DbCommand

As mentioned before, there are two implementations for the IDbCommand interface: one that creates and owns its own IDbConnection, the other that takes it in as a dependency.

This abstraction represents an object that can take an SQL statement and parameters, and return the result(s) to its caller.

DefaultDbCommand receives its IDbConnection dependency through property injection in its Create factory method.

AutoDbCommand takes a connection string and an IDbConnectionFactory instead.

UnitOfWork uses a DefaultDbCommand because the unit of work needs to own the connection, but AutoDbCommand could be used instead of a unit of work, if we just need a quick SELECT and no transaction.
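Side by side, creating the two looks something like this (a sketch – the exact Create signatures are abridged from the actual factory methods):

'shared plumbing for both:
Dim baseCommand As IDbCommandBase
Set baseCommand = DbCommandBase.Create(AdoParameterProvider.Create(AdoTypeMappings.Default))

'DefaultDbCommand: the caller creates & owns the connection
Dim db As IDbConnection
Set db = DbConnection.Create("connection string")
Dim cmd As IDbCommand
Set cmd = DefaultDbCommand.Create(db, baseCommand)

'AutoDbCommand: no connection to manage - it makes its own from the factory
Dim autoCmd As IDbCommand
Set autoCmd = AutoDbCommand.Create("connection string", New DbConnectionFactory, baseCommand)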

Abstract Factory

IDbConnectionFactory is an Abstract Factory here. This is needed, because unit tests need to be able to inject a stub factory that produces stub connections: an abstract factory is a factory interface that creates objects of a type that is also an abstraction – in this case, IDbConnectionFactory.Create returns an IDbConnection object. Implementing this factory class is exactly as simple as you’d think – here’s DbConnectionFactory:

'@Exposed
'@Folder("SecureADODB.DbConnection")
'@ModuleDescription("An implementation of an abstract factory that creates DbConnection objects.")
Option Explicit
Implements IDbConnectionFactory

Private Function IDbConnectionFactory_Create(ByVal connString As String) As IDbConnection
    Set IDbConnectionFactory_Create = DbConnection.Create(connString)
End Function

And here’s StubDbConnectionFactory:

'@Folder("Tests.Stubs")
'@ModuleDescription("A stub acting as a IDbConnectionFactory implementation.")
Option Explicit
Implements IDbConnectionFactory
Private Type TInvokeState
    CreateConnectionInvokes As Long
End Type
Private this As TInvokeState

Private Function IDbConnectionFactory_Create(ByVal connString As String) As IDbConnection
    this.CreateConnectionInvokes = this.CreateConnectionInvokes + 1
    Set IDbConnectionFactory_Create = New StubDbConnection
End Function

Public Property Get CreateConnectionInvokes() As Long
    CreateConnectionInvokes = this.CreateConnectionInvokes
End Property

The test stub is more “complex” because it tracks method invocations, so that tests can know whether & how many times any given member was invoked during a test run.

The Abstract Factory pattern is very useful with Dependency Injection: it gives us an abstraction to inject when a class needs a dependency that just cannot be injected when the object is created – the alternative would be tight coupling: if we weren’t injecting a connection factory, then the command class would’ve had to be the one invoking DbConnection.Create – tightly coupling it with the DbConnection class and instantly making unit testing impossible. An abstract factory removes the coupling and allows unit tests to inject an alternative/stub implementation of the factory that creates StubDbConnection objects.
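To illustrate, a test could inject the stub factory like this – a sketch using the stub classes above, with Rubberduck’s Assert API:

'@TestMethod("Infrastructure")
Private Sub Execute_CreatesConnectionFromFactory()
    Dim stubFactory As StubDbConnectionFactory
    Set stubFactory = New StubDbConnectionFactory
    
    Dim sut As IDbCommand
    Set sut = AutoDbCommand.Create("connection string", stubFactory, New StubDbCommandBase)
    
    sut.Execute "SELECT 42" 'no actual database is ever involved
    
    Assert.AreEqual 1&, stubFactory.CreateConnectionInvokes
End Sub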

Wrapping it all up

AutoDbCommand can very well be consumed as-is by the client code:

Dim results As ADODB.Recordset
Set results = AutoDbCommand.Create(connString, New DbConnectionFactory, DbCommandBase.Create(AdoParameterProvider.Create(AdoTypeMappings.Default))).Execute(sql)

The only problem is that, well, the dependencies need to be resolved somehow, and that means the client code is now responsible for wiring everything up. While each component has a clear purpose, explicitly creating all these objects quickly gets old and redundant: we need an object that simplifies this – enter IUnitOfWork, and now we can use this much simpler code:

Dim results As ADODB.Recordset
Set results = UnitOfWork.FromConnectionString(connString).Command.Execute(sql)

Unit of Work is a design pattern that encapsulates a transaction: each individual operation can succeed or fail, and the unit of work either succeeds or fails as a whole. These notions are abstracted in the IUnitOfWork interface:

'@Folder("SecureADODB.UnitOfWork")
'@ModuleDescription("Represents an object encapsulating a database transaction.")
'@Interface
'@Exposed
Option Explicit

'@Description("Commits the transaction.")
Public Sub Commit()
End Sub

'@Description("Rolls back the transaction.")
Public Sub Rollback()
End Sub

'@Description("Creates a new command to execute as part of the transaction.")
Public Function Command() As IDbCommand
End Function

When a UnitOfWork is created, it initiates a database transaction. When it is destroyed before the transaction is committed, the transaction gets rolled back and from the database’s point of view, it’s like nothing happened.

Transaction?

If you’re unfamiliar with database transactions, there’s an easy example to illustrate what they do: imagine you have an Accounts table, and you’re processing a transfer – you need to UPDATE the record for the source account to deduct the transfer amount, then UPDATE the record for the destination account to add the transferred amount. In a happy world where everything goes well, that would be the end of it… but the world is a cruel place, and assuming the 1st command goes through, nothing guarantees nothing will blow up when sending the 2nd command. Without transactions, the funds would simply vanish: they’re gone from the first account, and they were never added to the second account. With a transaction, we can roll everything back when the 2nd operation fails: no funds vanish, and the data is exactly the way it was before the transaction started.
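Expressed with this API, that transfer could look like this – a sketch assuming a made-up Accounts table, and assuming IDbCommand.Execute takes the SQL command string followed by the ordinal arguments, like GetSingleValue does:

Public Sub TransferFunds(ByVal fromId As Long, ByVal toId As Long, ByVal amount As Currency)
    Dim uow As IUnitOfWork
    Set uow = UnitOfWork.FromConnectionString("connection string")
    On Error GoTo CleanFail
    
    With uow.Command
        .Execute "UPDATE Accounts SET Balance = Balance - ? WHERE Id = ?", amount, fromId
        .Execute "UPDATE Accounts SET Balance = Balance + ? WHERE Id = ?", amount, toId
    End With
    
    uow.Commit
    Exit Sub
    
CleanFail:
    'nothing to do here: if we never reach Commit, the transaction is
    'rolled back when the unit of work terminates.
End Sub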


Again, the implementation is pretty straightforward – the only peculiarity is that the class has two factory methods – one named Create that takes all the dependencies in, and another named FromConnectionString that conveniently wires up a default set of dependencies (and then passes them to the Create method to avoid duplicating code).

'@Folder("SecureADODB.UnitOfWork")
'@ModuleDescription("An object that encapsulates a database transaction.")
'@PredeclaredId
'@Exposed
Option Explicit
Implements IUnitOfWork
Private Type TUnitOfWork
    Committed As Boolean
    RolledBack As Boolean
    Connection As IDbConnection
    CommandFactory As IDbCommandFactory
End Type
Private this As TUnitOfWork

'@Description("Creates a new unit of work using default configurations.")
'@Ignore ProcedureNotUsed
Public Function FromConnectionString(ByVal connString As String) As IUnitOfWork
    
    Dim db As IDbConnection
    Set db = DbConnection.Create(connString)
    
    Dim provider As IParameterProvider
    Set provider = AdoParameterProvider.Create(AdoTypeMappings.Default)
    
    Dim baseCommand As IDbCommandBase
    Set baseCommand = DbCommandBase.Create(provider)
    
    Dim factory As IDbCommandFactory
    Set factory = DefaultDbCommandFactory.Create(baseCommand)
    
    Set FromConnectionString = UnitOfWork.Create(db, factory)
    
End Function

'@Inject: just an idea.. see https://github.com/rubberduck-vba/Rubberduck/issues/5463
Public Function Create(ByVal db As IDbConnection, ByVal factory As IDbCommandFactory) As IUnitOfWork
    Errors.GuardNonDefaultInstance Me, UnitOfWork
    Errors.GuardNullReference factory
    Errors.GuardNullReference db
    Errors.GuardExpression db.State <> adStateOpen, message:="Connection should be open."
    
    Dim result As UnitOfWork
    Set result = New UnitOfWork
    Set result.CommandFactory = factory
    Set result.Connection = db
    
    Set Create = result
End Function

'@Inject: this member should only be invoked by Me.Create, where Me is the class' default/predeclared instance.
'@Ignore ProcedureNotUsed: false positive with v2.5.0.5418
Friend Property Set Connection(ByVal value As IDbConnection)
    Errors.GuardDoubleInitialization this.Connection
    Errors.GuardNullReference value
    Set this.Connection = value
    this.Connection.BeginTransaction
End Property

'@Inject: this member should only be invoked by Me.Create, where Me is the class' default/predeclared instance.
'@Ignore ProcedureNotUsed: false positive with v2.5.0.5418
Friend Property Set CommandFactory(ByVal value As IDbCommandFactory)
    Errors.GuardDoubleInitialization this.CommandFactory
    Errors.GuardNullReference value
    Set this.CommandFactory = value
End Property

Private Sub Class_Terminate()
    On Error Resume Next
    If Not this.Committed Then this.Connection.RollbackTransaction
    On Error GoTo 0
End Sub

Private Sub IUnitOfWork_Commit()
    Errors.GuardExpression this.Committed, message:="Transaction is already committed."
    Errors.GuardExpression this.RolledBack, message:="Transaction was rolled back."
    On Error Resume Next ' not all providers support transactions
    this.Connection.CommitTransaction
    this.Committed = True
    On Error GoTo 0
End Sub

Private Function IUnitOfWork_Command() As IDbCommand
    Set IUnitOfWork_Command = this.CommandFactory.Create(this.Connection)
End Function

Private Sub IUnitOfWork_Rollback()
    Errors.GuardExpression this.Committed, message:="Transaction is already committed."
    On Error Resume Next ' not all providers support transactions
    this.Connection.RollbackTransaction
    this.RolledBack = True
    On Error GoTo 0
End Sub

Errors

If you paid close attention to the code listings so far, you likely already noticed the many Errors.GuardXxxxx member calls scattered throughout the code. There are probably as many ways to deal with custom errors as there are VBA classes out there; this is one way. Probably not the best way, but it feels “just right” for me in this case, and I think I like it enough to keep using it until the problems it creates become clearer (there’s always something). Errors is a standard private module in the project that defines custom error codes. Okay, I was lazy and deemed a single SecureADODBCustomError all I needed, but it could also have been an Enum with descriptive names for each custom error code.

The module simply exposes a small number of very simple Sub procedures that make it easy for the rest of the code to raise meaningful custom errors:

'@Folder("SecureADODB")
'@ModuleDescription("Global procedures for throwing common errors.")
Option Explicit
Option Private Module

Public Const SecureADODBCustomError As Long = vbObjectError Or 32

'@Description("Re-raises the current error, if there is one.")
Public Sub RethrowOnError()
    With VBA.Information.Err
        If .Number <> 0 Then
            Debug.Print "Error " & .Number, .Description
            .Raise .Number
        End If
    End With
End Sub

'@Description("Raises a run-time error if the specified Boolean expression is True.")
Public Sub GuardExpression(ByVal throw As Boolean, _
Optional ByVal Source As String = "SecureADODB.Errors", _
Optional ByVal message As String = "Invalid procedure call or argument.")
    If throw Then VBA.Information.Err.Raise SecureADODBCustomError, Source, message
End Sub

'@Description("Raises a run-time error if the specified instance isn't the default instance.")
Public Sub GuardNonDefaultInstance(ByVal instance As Object, ByVal defaultInstance As Object, _
Optional ByVal Source As String = "SecureADODB.Errors", _
Optional ByVal message As String = "Method should be invoked from the default/predeclared instance of this class.")
    Debug.Assert TypeName(instance) = TypeName(defaultInstance)
    GuardExpression Not instance Is defaultInstance, Source, message
End Sub

'@Description("Raises a run-time error if the specified object reference is already set.")
Public Sub GuardDoubleInitialization(ByVal instance As Object, _
Optional ByVal Source As String = "SecureADODB.Errors", _
Optional ByVal message As String = "Object is already initialized.")
    GuardExpression Not instance Is Nothing, Source, message
End Sub

'@Description("Raises a run-time error if the specified object reference is Nothing.")
Public Sub GuardNullReference(ByVal instance As Object, _
Optional ByVal Source As String = "SecureADODB.Errors", _
Optional ByVal message As String = "Object reference cannot be Nothing.")
    GuardExpression instance Is Nothing, Source, message
End Sub

'@Description("Raises a run-time error if the specified string is empty.")
Public Sub GuardEmptyString(ByVal value As String, _
Optional ByVal Source As String = "SecureADODB.Errors", _
Optional ByVal message As String = "String cannot be empty.")
    GuardExpression value = vbNullString, Source, message
End Sub

Most of these procedures are invoked as the first executable statement in a given scope, to raise an error given invalid parameters or internal state, such as these:

Private Sub IUnitOfWork_Commit()
    Errors.GuardExpression this.Committed, message:="Transaction is already committed."
    Errors.GuardExpression this.RolledBack, message:="Transaction was rolled back."
    On Error Resume Next ' not all providers support transactions
    this.Connection.CommitTransaction
    this.Committed = True
    On Error GoTo 0
End Sub

Consistently raising such errors is the single best way to ensure our objects are always in a known and usable state, because we outright forbid them to be invalid. These validation clauses are called guard clauses, hence the GuardXxxxx procedure names.

A lot of the unit tests simply verify that, given the specified conditions, the expected error is raised:

'@TestMethod("Factory Guard")
Private Sub Create_ThrowsIfNotInvokedFromDefaultInstance()
    On Error GoTo TestFail
    
    With New AutoDbCommand
        On Error GoTo CleanFail
        Dim sut As IDbCommand
        Set sut = .Create("connection string", New StubDbConnectionFactory, New StubDbCommandBase)
        On Error GoTo 0
    End With
    
CleanFail:
    If Err.Number = ExpectedError Then Exit Sub
TestFail:
    Assert.Fail "Expected error was not raised."
End Sub

If each guard clause has a unit test, then the tests are effectively documenting how the objects are meant to be used. With more specific custom errors, the tests would be more accurate, but there’s a point where you need to look at what you’ve got and say “I think I can work with that”, and move on.


Audience

Obviously, one doesn’t import 20 classes into their VBA project just to send one ADODB command to a database server. However if you’re maintaining a VB6 application that uses ADODB all over the place, leaks connections, leaves recordsets dangling, …then importing this API can really help tighten up the data access code in that legacy app. Or maybe you’re writing a complex data-driven system in VBA for Excel because that’s all you’ve got, and a UnitOfWork abstraction makes sense for you.

The goal here is mostly to 1) demonstrate proper usage of ADODB.Command for secure, parameterized queries, and 2) demonstrate that Classic VB (VB6/VBA) has always had everything everyone ever needed to write full-blown object-oriented code that leverages abstraction, encapsulation, and polymorphism – making it possible to write clean and fully unit-tested code.

…and of course, it makes a great practical application of the OOP concepts discussed in many other articles on this blog. Studying the code in this project gives you insight into…

  • OOP foundations: abstraction, encapsulation, polymorphism.
  • SOLID principles: single responsibility, dependency inversion, etc.
  • DI techniques: property injection, abstract factory.
  • Unit testing: what to test, how to test, stubbing dependencies, etc.
  • Using custom errors, guard clauses, input validation.
  • Leveraging Rubberduck annotations, minimizing inspection results.

“I’m not a programmer”

If you’re an accountant, a sales analyst, or in any other office position where writing VBA code helps you do your job faster, you may have uttered the words “I’m not a programmer” before, and that wouldn’t have been wrong: once the code is written, you’d tweak it every once in a while to fix a bug here or there, and then move on to do your actual job as soon as things look like they work as they should. If you’re finding that as weeks and months pass, you’re spending more and more time debugging that code, it might pay off to learn a bit more about how “actual programmers” do things, but be warned: it’s a bit of a rabbit hole – in a good way, but still, it goes as deep as you’re willing to go.

If you’re a consultant delivering advanced Excel solutions to your business clients, you may have thought, said, or written the same words, too. But you like your worksheets efficient, flexible, reliable, easy to maintain; over the years you’ve become an expert at dissecting and modeling a business problem into a fine solution that will live on and grow with the client. You are a “power user”, a professional, and it shows.

Writing code involves exactly the same problem-solving thought process: dissecting a problem into small steps, and modeling it into …a sequence of executable statements. When you use a SUM function in a worksheet, you assess whether the range of cells you’re adding up will need to grow over time, and you make sure it’s as simple as possible to add or insert a new row without breaking the integrity of the worksheet: you’re not just solving the problem at hand, you’re anticipating the extension points, facilitating them, making it harder to break things.

You shade the background of cells intended for data entry, use borders around fields, conditional formatting, and data validation to ensure everything is obvious and remains consistent; you source data validation lists from named ranges pointing to a column in a table, so that adding new possible valid entries is easy as pie and requires no other step than… adding the new possible valid entries in the table. Next to another table that requires a particular sort order – because it’s being used by dynamic named ranges that source data validation lists for co-dependent dropdowns – you might put up a very obvious formatted shape explaining why that table needs to be sorted by this column then that column, and what happens to the associated validation dropdowns when the sort is broken.

See, you are a programmer. Worksheets are programs – even more intensely so with the amazing new features coming to Excel: dynamic arrays are changing the entire paradigm and turning the very thinking of worksheet functions into something that really isn’t very far from the mindset you’d have in functional programming.

So why is it that VBA code is so often seen as merely a sequence of executable statements then? Why is it that “it works, therefore it’s good enough” is so often where the bar is? You could have made that data validation list work off a standard range of cells in some (hidden?) column somewhere off-screen, and that would have worked too… but “well, it works” isn’t where the bar should be at, and you know it.

Depending on what the code needs to do, VBA code can become much more than just a macro once you start not just solving the problem at hand, but also anticipating the extension points, facilitating them, making it easier to maintain, and harder to break things. If writing code is part of what you do for a living, then you might as well write good code. Good code isn’t just code that works. It’s code that adheres to a number of language-agnostic principles and modern-day guidelines that even plain procedural code should follow. It’s code that Joe from accounting probably couldn’t have written by themselves, but that they could likely read and understand (at least at the higher abstraction levels), and if they know enough VBA to be dangerous, they could likely even maintain and extend it!

I’m not saying every piece of VBA code needs OOP, or that dependency injection, inversion of control, and 20 class modules with 1 method each ought to get involved in sorting a ListObject. Just that maybe, VBA code would be a little less dreadful to the eventual IT staff inheriting it, if instead of saying “I’m not a programmer”, we cared about the quality of our code in the exact same way we care about the quality of our worksheets and dashboards, or Access databases and reports, or whatever it is that we’re doing.

The Macro Recorder Curse

The macro recorder is a wonderful thing. It’s one of the tools at your disposal to assist you in your journey, a good way to observe code that does exactly what you just did, and to learn which parts of the object model to use for doing what. The problems begin when you see macro recorder code constantly invoking Range.Select and Worksheet.Activate, working against the Selection object, generating dozens of repetitive statements and redundant code that can easily trick a neophyte into thinking “so that’s how it’s done!” – the macro recorder is a great way to familiarize yourself with a number of APIs, …and that’s all it needs to be.

There are very few ways to write more inefficient and bug-prone code than to use the macro recorder’s output as a model of how VBA code should be written. How to avoid Select and Activate has to be the single most linked-to post in the VBA tag on Stack Overflow, yet an untold number of young souls remain to be saved from the curse of the macro recorder.

Of course, we have all the tools we need to defeat that curse. I’m not going to repeat everything in that very good SO thread, but the crux of it boils down to, in my opinion, a few simple things.

Early Binding and Guard Clauses

From an automation standpoint, Selection is an interesting object. In the Excel object model, Selection is the Shape that’s selected, the Chart you just clicked on, the Range of cells you navigated to. If the current selection is relevant to your code, consider making it an input with an explicit object type: the selection you’re expecting very likely has a very specific type, like a Range. Simple example:

Public Sub MyMacro()
    Selection.Value = 42 'multiple possible errors
End Sub

If Selection is pulled from that code and taken in as a Range parameter instead, we eliminate all ambiguities and restore the natural balance of the universe by coding against the Range interface rather than against Object – which means compile-time validation and IntelliSense:

Public Sub MyMacro()
    If Not TypeOf Selection Is Excel.Range Then Exit Sub '<~ that's a *guard clause*
    DoSomething Selection
End Sub

Private Sub DoSomething(ByVal target As Range)
    target.Value = 42
End Sub

Note the similarities between MyMacro in the first snippet, and DoSomething in the second one – it’s what they do differently that makes… all the difference. Now the procedure can work with any Range object, whether it’s actually selected or not.

Working with Selection is never really needed: what you can do against Selection you can do with any Range if what you mean to work with is a Range, or any Chart if what you mean to work with is a Chart.

It might look like more code that’s more complicated to write – it might even be. But binding these types at compile time makes things much simpler, really. When we make a member call against Object, the compiler doesn’t even care whether the member exists. This means overhead at run-time, and a non-zero chance of error 438 being raised when the member does not exist. Like Variant, Object is very flexible… too much so for its own good.

A member call against Selection is inherently late-bound. Exactly like dynamic in C#, you want to break out of it, as soon as possible: if you’re expecting a Range, declare a Range and run with it, be explicit. In turn, you’ll be rewarded with VBA blowing up with a type mismatch error (13) early on, rather than with some object doesn’t support property or method error (438), possibly far removed from where the problem really stems from – the instruction that wrongly assumed a Selection could be treated like any old Range:

Public Sub MyMacro()
    Dim cell As Range
    Set cell = Selection '<~ type mismatch if Selection isn't a Range
End Sub

The macro recorder will never generate a single control flow statement. No loops, no conditionals, no variables, no structure. If you take something that’s tedious and make a computer do it tediously for you, you may be “getting the job done”, but you’re 1) being very mean to your computer, and 2) quite possibly misusing the object model in ways that make it inefficient. The best way to tell a computer to do something 20 times isn’t to tell it 20 times to do one thing!

By simply introducing/declaring variables to hold the Worksheet and Range objects we’re working with, we eliminate the need to Select and Activate everything we touch, and Selection becomes useless, for the most part – if your macro is kicked off with a keyboard shortcut and works with whatever is selected at that time, more power to you – but that doesn’t mean your entire code needs to work with Selection, only that you need to validate what’s selected and switch to early binding as early as possible, using properly typed local variables.
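Here’s what that rewrite typically looks like – “Data” standing in for whatever sheet the recorded macro was selecting:

'recorded (paraphrased):
'    Sheets("Data").Select
'    Range("B2:B21").Select
'    Selection.Value = 42

'rewritten: objects in variables, no Select/Activate, no Selection -
'and one single write instead of twenty
Dim ws As Worksheet
Set ws = ThisWorkbook.Worksheets("Data")
ws.Range("B2:B21").Value = 42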

Use Range.Select when you need to programmatically select cells in the worksheet – visually, for the user to see that selection being made and perhaps interact with it.

On Error Resume Next

Despite everything that’s been written about it, sometimes On Error Resume Next is the perfect tool for the job. Say you have a ListObject named Table1 on Sheet1; you could have a Table1 property that returns this ListObject:

'@Description("Gets the 'Table1' ListObject from this sheet.")
Public Property Get Table1() As ListObject
  Set Table1 = Me.ListObjects("Table1")
End Property

The only problem is that when the Table1 table is inevitably renamed (right?) in Excel, this property starts raising run-time error 9, because the ListObjects collection of Sheet1 no longer contains an item named "Table1" – and throwing an error from a Property Get procedure goes against property design best practices. Enter On Error Resume Next:

'@Description("Gets the 'Table1' ListObject from this sheet, or 'Nothing' if not found.")
Public Property Get Table1() As ListObject
  On Error Resume Next ' suppresses possible error 9...
    Set Table1 = Me.ListObjects("Table1") ' ...that would be raised while evaluating RHS
  On Error GoTo 0 ' execution would jump here before the Table1 reference is assigned
End Property

Error handling is promptly and explicitly restored with On Error GoTo 0, and the property now returns Nothing when the table isn’t found, deferring to the caller the decision of what to do with an invalid table:

Dim table As ListObject
Set table = Sheet1.Table1
If table Is Nothing Then Exit Sub ' or raise an error? MsgBox?
'...safely carry on...

A nice side-effect of this is that it’s very compelling for the calling code to capture the returned value into a local variable. Leaving the complexities of caching to the calling code (you don’t want/need to dereference that ListObject from the worksheet’s ListObjects collection every single time you need to access it!) also makes it much less likely to run into an awkward situation such as this:

'@Description("Gets the 'Table1' ListObject from this sheet.")
Public Property Get Table1() As ListObject
  Static cache As ListObject
  If cache Is Nothing Then
    Set cache = Me.ListObjects("Table1")
  End If
  Set Table1 = cache
End Property

...
  Debug.Print Me.Table1.Name
  ...
  Me.ListObjects(1).Delete
  ...
  Debug.Print Me.Table1.Name ' error 424 "object required"
...

Assuming the table initially exists on the worksheet, the first call to Me.Table1 sets the cache reference, and the calling instruction outputs the table’s name. In the second call to Me.Table1, the cache reference is already set, it’s not Nothing anymore – but the object pointer is zombified: there’s a pointer, but the object itself is gone, and the corrupted cache state might very well persist until an End instruction kills the entire execution context. And that’s why cache invalidation is up there with naming things among the two hard things in programming… but I digress.

On Error Resume Next + Conditionals = Trouble

Of all the things that could go wrong with generously suppressing error handling by spraying On Error Resume Next all over the code base, the worst is that potentially catastrophic bugs like the one below can happen – what’s the MsgBox saying, and what icon does it have? Is that clearly intentional?

Public Sub AntiExample()
  'On Error GoTo Rubberduck
  On Error Resume Next
  ' ...code...
  Sheet1.Range("A1").Value = CVErr(xlErrNA) ' A1 contains a #N/A error value
  ' ...code...
  ' ...
  ' ...more code...
  ' ...
  If Sheet1.Range("A1").Value = 42 Then
    MsgBox Err.Description, vbCritical
    Exit Sub
  Else
    MsgBox Err.Description, vbExclamation
    Exit Sub
  End If
  Exit Sub
Rubberduck:
  MsgBox Err.Description, vbInformation
End Sub

Did you guess it right? I’m only going to tell you that Resume Next can be extremely dangerous if it’s used wrong: a MsgBox is harmless, but that conditional block could contain anything. When in doubt, On Error GoTo Rubberduck. If you choose to use Resume Next anyway, there’s an inspection that can warn you when it is used without being paired with On Error GoTo 0 (in most common scenarios anyway) – but it’s not that inspection’s job to tell you there’s too much code between On Error Resume Next and On Error GoTo 0: that is entirely up to you. The way I see it, OERN is great for fencing a single potentially problematic statement, and that is easier to do when the procedure is responsible for very few things: when you start having multiple potential errors to handle in the same scope, it’s past time to think about increasing the abstraction level, and moving code to smaller procedures that do fewer things.

Pragmatically speaking, if used correctly, On Error Resume Next does not really need to be paired with On Error GoTo 0: execution state is always local to the procedure (/stack frame), and with On Error Resume Next the execution state will never be in error after the procedure exits. In a sense, specifying it explicitly is a bit like specifying an explicit Public access modifier, or an explicit ByRef specifier on a parameter declaration: it’s the implicit default, made explicit – on the other hand, it’s much cleaner to exit a procedure with Err.Number being 0, consistent with the execution/error state.
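One clean way to fence a single risky statement is a Try function that returns a Boolean and passes its result out through a ByRef parameter – a sketch (the name and shape are mine):

'fences exactly one statement that can raise; the caller gets a Boolean
Private Function TryGetTable(ByVal ws As Worksheet, ByVal tableName As String, _
        ByRef outTable As ListObject) As Boolean
    On Error Resume Next
    Set outTable = ws.ListObjects(tableName) 'error 9 if not found
    On Error GoTo 0
    TryGetTable = Not outTable Is Nothing
End Function

'calling code reads like a sentence, and no error state ever leaks out:
Dim table As ListObject
If Not TryGetTable(Sheet1, "Table1", table) Then Exit Sub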

When you see On Error Resume Next at the top of a rather large procedure, comment it out and run the code if possible; see what errors are being silently shoved under the carpet, what code executes in that uncertain error state. Try to narrow down the potential errors to specific statements, and isolate them: reduce the amount of code that is allowed to run in an error state to a bare minimum, pull the “dangerous” statements into their own function, see if there’s a way to avoid needing to handle an error in the first place. In the above code for example, the error raised here:

If Sheet1.Range("A1").Value = 42 Then

Could easily be avoided by verifying whether we’re looking at a value that can legally be compared to 42 (or any other non-Error value), like this:

Dim cellValue As Variant
cellValue = Sheet1.Range("A1").Value ' cellValue is Variant/Error

If IsNumeric(cellValue) Then ' false
  If cellValue = 42 Then ' comparison is safe in this branch
     '...
  End If
ElseIf Not IsError(cellValue) Then ' false
  'cellValue isn't an error value, but isn't numeric either
  '...
Else
  'execution branches here
  '...
End If

Of course that’s just one example… and every situation is different: if you’re reading a Recordset and one of the fields is missing, you have every right to blow things up… but you still have to make sure you clean up the mess and close the connection properly before you exit the scope. Then again, if you were given the opened connection as a parameter, life is much simpler: it’s not your job at all to close that connection – whoever created it will be assuming it’s still open when the failing procedure returns! The basic golden rule of thumb is that the code that’s responsible for creating an object (or invoking a factory method that creates it) should also be the code that’s responsible for destroying that object.
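That rule of thumb, sketched out – ProcessOrders and the Orders table are made up:

'the callee borrows the connection: using it is its job; closing it is not
Private Sub ProcessOrders(ByVal conn As ADODB.Connection)
    Dim rs As ADODB.Recordset
    Set rs = conn.Execute("SELECT OrderId FROM Orders")
    '...if anything blows up here, the error propagates to the caller,
    'and the connection remains the caller's responsibility.
    rs.Close
End Sub

'the creator opens AND closes, no matter what happens downstream
Public Sub Main()
    Dim conn As ADODB.Connection
    Set conn = New ADODB.Connection
    conn.Open "connection string"
    
    On Error GoTo CleanFail
    ProcessOrders conn
    
CleanExit:
    If conn.State = adStateOpen Then conn.Close
    Exit Sub
CleanFail:
    'report/log the error as needed, then clean up either way
    Resume CleanExit
End Sub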