The email landed in his inbox at 2:47 PM on a Friday. “Access database won’t open - URGENT!”
For an IT director at a mid-sized manufacturing firm, this wasn’t just another end-of-week crisis. This particular Access database tracked millions in customer orders dating back to 2003. It held the company’s lifeblood - and now it wouldn’t open.
This story is - to understate things considerably - not unique.
Across thousands of businesses, critical data remains locked in aging Microsoft Access databases. These databases, often created decades ago by long-departed employees, continue running critical business processes long past their intended lifespan. Like digital time bombs, they tick away until something finally breaks.
Sometimes the break comes from growth - Access databases have strict size limits that many businesses eventually hit. Other times, it’s compatibility issues with newer Windows versions or Office updates. Occasionally, file corruption strikes without warning. Whatever the cause, the result is the same: panic as employees realize they can’t access vital business data.
The signs of impending Access database problems often appear well before crisis hits. Users complain about slowdowns when multiple people try to use the database simultaneously. Random error messages pop up about “locked records” or “disk I/O errors.” Reports that used to take seconds now take minutes. Each symptom points to the same conclusion: the database has outgrown its original design.
For many organizations, these Access databases represent their first real attempt at structured data management. Built in the late 1990s or early 2000s, they solved immediate business needs - tracking inventory, managing customer orders, or maintaining employee records. They worked well enough that nobody thought about replacing them. Until now.
Today’s business environment demands more from databases. Companies need web access to their data. They need better security and access controls. They need integration with modern business intelligence tools. They need reliable backup and recovery options. Most of all, they need stability - and aging Access databases simply can’t deliver.
Migration, though, is scary.
Access files are proprietary - tools for working with them aren’t common, and direct conversion can be challenging. Microsoft’s SQL dialect differs from standard SQL in subtle but important ways; by default, in fact, it emulates ANSI-89 SQL, which, as the name implies, dates back to 1989.
Forms and reports built into Access applications often contain business logic that needs to be preserved - and the full details have often been forgotten, or are known only to an employee who left the company long ago.
The stakes in these migrations are high. Years of business data must move correctly, maintaining integrity and relationships. Operations can’t stop for long periods while data converts. Users need training on new systems. And through it all, the business must keep running.
This article presents a battle-tested approach to Access database migration. We’ll walk through real-world techniques using open-source tools like mdbtools and SQLAxe to methodically extract and convert both schema and data. We’ll examine common pitfalls and how to avoid them. Most importantly, we’ll show how to validate every step of the process to ensure nothing gets lost in translation.
Our method splits the conversion into manageable phases:
1. Schema extraction and analysis
2. Data export and transformation
3. Import preparation and testing
4. Production migration and validation
Each phase builds on the previous ones - and together, they form a highway that starts at legacy Access and ends with a modern MySQL database. We’ll explore each phase in detail in subsequent articles, but first, let’s understand exactly what we’re up against with Access databases and why MySQL offers a compelling alternative.
Stay tuned as we dive deep into the technical details of this critical modernization process. Whether you’re facing an immediate crisis, or simply planning ahead, this guide will help you navigate the challenges of database migration successfully.
Understanding Access
To understand why Access migrations need careful handling, we must first grasp how Access fundamentally differs from modern database systems.
Unlike MySQL or PostgreSQL, Access isn’t just a database - it’s a complete application development platform.
In fact, each .mdb or .accdb file bundles together data, queries, forms, reports, and code modules into a single package.
This bundling shaped how businesses used Access. Rather than building separate database and application layers, companies often merged them. Business logic that modern developers would put in middleware ended up scattered through Access forms and VBA modules. Data validation rules got embedded directly in table definitions. Reports hard-coded business rules about how to present information.
This isn’t necessarily a problem in all cases - in fact, it makes it easier to get started, and if your system is used by just a person or two, that may be sufficient. Despite what the pundits on LinkedIn might tell you, rapid iteration didn’t start with vibe coding in 2024 - people have been using smaller tools to get a solution working for a long, long time.
The Access file format itself poses challenges. Microsoft’s Jet database engine uses a proprietary format that few tools can read directly. Tools like mdbtools can extract some schema and data information, but it’s not as simple as, say, converting a JPG to a PNG - mdbtools can’t handle every Access feature.
Consider how Access handles data types. Where MySQL offers clear-cut choices like INTEGER or VARCHAR, Access uses more fluid types. A “Number” field might hold anything from a tiny integer to a massive floating-point value. Text fields automatically expand to hold whatever data users enter. This flexibility helped non-technical users build working databases - but it complicates migration.
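To make this concrete, here is a minimal sketch (the table and column names are hypothetical) of the explicit choices a migration has to make when Access’s fluid “Number” and “Text” fields become MySQL columns:
-- Hypothetical MySQL definitions replacing loosely typed Access fields
CREATE TABLE customers (
    customer_id   INT NOT NULL,          -- Access: Number (Long Integer)
    credit_limit  DECIMAL(10,2) NULL,    -- Access: Number (Double) holding currency; DECIMAL avoids rounding surprises
    customer_name VARCHAR(100) NULL,     -- Access: Text; a length must now be chosen explicitly
    notes         TEXT NULL              -- Access: Memo; maps to an unbounded text type
);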
The GUI-based nature of Access also poses difficulties; although its SQL support handles NULL values reasonably well, the UI makes it quite easy to conflate values like FALSE, NULL, and an empty string - though, of course, this confusion happens in other databases as well.
Access also has a hard size limit: a single database file can’t exceed 2 gigabytes - tiny by today’s standards. Many businesses worked around this by linking multiple databases together, and those links created hidden dependencies that migration plans must account for.
Security in Access takes a distinctly Windows-centric approach. Rather than using database users and roles, many Access applications rely on Windows authentication or even hard-coded credentials. Moving to MySQL means rethinking how users and permissions work.
The peculiarities of Access SQL syntax add another layer of complexity. Consider these examples:
• Access uses asterisks for wildcards where standard SQL uses percent signs
• Date literals in Access use # marks instead of quotes
• The IIF() function replaces standard CASE statements
• String concatenation uses & instead of ||
These differences mean that queries that work perfectly in Access often fail in MySQL without careful translation. The migration process must account for these syntax variations while preserving the original logic.
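For instance, a query written against Access (table and field names here are hypothetical) might need all four of those translations at once:
-- Access SQL: & concatenation, IIF(), * wildcard, # date literals
SELECT FirstName & ' ' & LastName AS FullName,
       IIF(Balance > 0, 'Open', 'Settled') AS Status
FROM Customers
WHERE LastName LIKE 'Sm*' AND CreatedDate > #01/01/2020#;

-- Rough MySQL equivalent
SELECT CONCAT(FirstName, ' ', LastName) AS FullName,
       CASE WHEN Balance > 0 THEN 'Open' ELSE 'Settled' END AS Status
FROM Customers
WHERE LastName LIKE 'Sm%' AND CreatedDate > '2020-01-01';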
Access’s forms and reports present perhaps the biggest migration challenge. These often contain crucial business logic that isn’t documented anywhere else. A simple-looking report might incorporate complex calculations, conditional formatting rules, and custom VBA functions. Moving to MySQL means either rebuilding this functionality in a new application layer or carefully extracting and translating it.
Data validation provides another example of hidden complexity. Access tables can include validation rules that check data before saving. These rules often use custom VBA functions not visible in the table schema. A successful migration needs to identify these rules and implement equivalent checks in MySQL.
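As a hedged sketch (the table and columns are hypothetical), a rule that Access might enforce through a field’s Validation Rule property could be re-expressed as a CHECK constraint - keeping in mind that MySQL only enforces CHECK constraints from version 8.0.16 onward:
-- Re-implementing an Access-style validation rule in MySQL 8.0.16+
ALTER TABLE orders
    ADD CONSTRAINT chk_order_quantity CHECK (quantity > 0 AND quantity <= 10000);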
The way Access handles transactions differs markedly from modern databases. Where MySQL uses explicit transaction boundaries, Access often relies on implicit transactions. This can lead to unexpected behavior when moving data between systems.
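In MySQL, related changes should be wrapped in explicit transaction boundaries rather than left to the implicit behavior Access users are used to. A minimal illustration (table and values hypothetical):
-- Either both updates take effect, or neither does
START TRANSACTION;
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;
UPDATE accounts SET balance = balance + 100 WHERE account_id = 2;
COMMIT;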
Understanding these Access quirks shapes successful migration strategies. Rather than treating Access as just another database, experienced developers approach it as a legacy application platform. They know that extracting and converting the data is just the beginning - the real work lies in understanding and preserving the business logic buried within the Access application.
This deeper understanding guides technical decisions throughout the migration process. It helps developers choose appropriate tools, plan for data transformation needs, and identify potential pitfalls before they become problems.
Remember: successful Access migration requires more than just moving data. It demands a thorough understanding of how Access works and how businesses have used it. Only with this foundation can we build reliable migration paths to modern database platforms.
Converting Access Database Schemas
When faced with an aging Microsoft Access database, the first step toward modernization involves extracting and converting its structural blueprint - the schema. This critical phase lays the groundwork for all subsequent migration steps, much like surveying land before beginning construction.
The schema defines how data fits together - which fields exist, how they relate, and what rules govern them. In Access databases, this information lives inside the .mdb file itself, written in Microsoft’s proprietary format. Getting it out requires specialized tools and careful attention to detail.
Think of schema extraction like archeology - we’re carefully unearthing artifacts that must be preserved and translated. One wrong move could compromise the entire dig. That’s why we start with basic extraction before attempting any conversion.
The basic tool we’ll use in this article is called “mdbtools” - it’s an open-source utility that can read Microsoft’s proprietary database format. While not perfect, it provides a reliable starting point for understanding how the database is structured.
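mdbtools is packaged for most common platforms; depending on your system, installing it typically looks something like this:
# Debian or Ubuntu
sudo apt-get install mdbtools
# macOS with Homebrew
brew install mdbtools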
Let’s look at a basic script that begins this extraction process:
#!/bin/bash
# Basic conversion test script
# Usage: ./mdbconvert.sh source.mdb
if [ "$#" -ne 1 ]; then
echo "Usage: $0 source.mdb"
exit 1
fi
SOURCE_DB="$1"
WORK_DIR="output"
SCHEMA_FILE="$WORK_DIR/schema.sql"
MYSQL_SCHEMA="$WORK_DIR/mysql_schema.sql"
# Create working directory
mkdir -p "$WORK_DIR"
# Extract schema
echo "Extracting schema..."
mdb-schema "$SOURCE_DB" mysql > "$SCHEMA_FILE"
echo "Initial schema conversion complete. Results in $WORK_DIR"
Let’s break this script down piece by piece:
The first section handles basic script setup. The shebang line (#!/bin/bash) tells Unix-like systems this is a bash script. The usage check ensures someone provided a database file to work with - without it, the script would fail in confusing ways.
if [ "$#" -ne 1 ]; then
echo "Usage: $0 source.mdb"
exit 1
fi
This block counts command-line arguments. If there isn’t exactly one argument (the database file), it shows how to use the script correctly and exits. This kind of error checking prevents frustrating troubleshooting sessions later.
Next comes variable setup:
SOURCE_DB="$1"
WORK_DIR="output"
SCHEMA_FILE="$WORK_DIR/schema.sql"
MYSQL_SCHEMA="$WORK_DIR/mysql_schema.sql"
These lines store important file paths we’ll use later. Using variables instead of hard-coded paths makes the script more flexible and easier to modify. The $1 captures whatever database file name the user provided.
The working directory creation follows:
mkdir -p "$WORK_DIR"
This creates a fresh output directory for our work. The -p flag means “create parent directories if needed” and “don’t complain if the directory already exists.” This prevents errors if someone runs the script multiple times.
Finally, we reach the actual schema extraction:
echo "Extracting schema..."
mdb-schema "$SOURCE_DB" mysql > "$SCHEMA_FILE"
This command does the heavy lifting. It tells mdbtools to read the Access database and output its structure in a format roughly compatible with MySQL. The > operator redirects this output to our schema file instead of showing it on screen.
The script ends by confirming completion and telling the user where to find the results. This might seem minor, but clear feedback prevents confusion about whether the script worked.
While this script provides a starting point, it’s important to understand its limitations. The extracted schema will need cleanup before use with MySQL. Access and MySQL handle many things differently:
- Data types don’t map perfectly between systems
- Default values may use incompatible syntax
- Index names might need adjustment
- Some constraints may need manual translation
- Character encoding specifications might differ
Think of this initial extraction as creating a rough draft that needs editing. The next phase will involve carefully reviewing and adjusting the schema to work properly with MySQL while preserving the original database’s integrity.
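As a rough illustration (the table is hypothetical, and your extracted output will differ in its details), the kind of editing involved often looks like this:
-- Extracted draft, roughly as Access defined the table
CREATE TABLE Orders (
    OrderID    INTEGER,        -- was an AutoNumber (identity) column in Access
    OrderDate  DATETIME,
    Notes      TEXT
);

-- After manual adjustment for MySQL
CREATE TABLE Orders (
    OrderID    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,   -- restore identity behavior explicitly
    OrderDate  DATETIME NULL,
    Notes      TEXT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;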
Warning signs that your extracted schema needs attention include:
- References to Access-specific functions
- Unusual character encodings
- Complex default values
- Identity columns
- Timestamp handling
- Text fields without specified lengths
Successful schema conversion requires both technical knowledge and business understanding. Tools can handle the basic translation, but you’ll need to make informed decisions about how to handle Access-specific features that don’t have direct MySQL equivalents.
Remember: rushing through schema conversion to get to data migration often leads to problems that are much harder to fix later. Take the time to get this foundation right, and the rest of the migration will go much more smoothly.
The next steps involve detailed analysis of the extracted schema, making necessary adjustments, and validating that the converted structure will support your business needs in MySQL. We’ll explore those processes in subsequent articles.
In the meantime, save this script and run it against a test database. Examine the output carefully - it will give you valuable insights into the complexity of your upcoming migration project.
Converting Access Data to MySQL
The hardest part of moving from Access to MySQL isn’t the schema - it’s the data itself. While database structures can be rewritten if needed, data must move perfectly. Every record, every field value, every relationship needs to arrive intact in the new system. Mistakes here can break critical business processes or lead to wrong decisions based on corrupt data.
Consider a manufacturing company’s order database. A single wrong decimal point could mean shipping 100 items instead of 10. A character encoding problem might turn customer names into gibberish. A broken relationship between tables might disconnect orders from their customers entirely. The stakes are high, and the margin for error is zero.
That’s why we take an incremental, carefully validated approach to data migration. Rather than attempting to move everything at once, we break the process into smaller, manageable steps. Each step produces output we can check before moving forward.
Our starting point is getting the data out of Access and into a format we can work with. CSV files provide an ideal intermediate format - they’re simple, human-readable, and supported by virtually every database system. But even this apparently straightforward process requires careful handling.
Let’s look at a script that handles the initial data export:
#!/bin/bash
# Data export script for Access to MySQL migration
# Usage: ./export-data.sh source.mdb
if [ "$#" -ne 1 ]; then
echo "Usage: $0 source.mdb"
exit 1
fi
SOURCE_DB="$1"
WORK_DIR="data_export_$(date +%Y%m%d_%H%M%S)"
TABLES_FILE="$WORK_DIR/tables.txt"
LOG_FILE="$WORK_DIR/export.log"
# Create working directory
mkdir -p "$WORK_DIR"
# Get list of tables
echo "Identifying tables..."
mdb-tables "$SOURCE_DB" > "$TABLES_FILE"
# Export each table
while read -r table; do
echo "Exporting $table..."
mdb-export "$SOURCE_DB" "$table" > "$WORK_DIR/${table}.csv" 2>> "$LOG_FILE"
done < "$TABLES_FILE"
echo "Export complete. Check $LOG_FILE for any errors."
Let’s break this down piece by piece to understand exactly what’s happening and why each step matters.
First, we set up basic error checking:
if [ "$#" -ne 1 ]; then
echo "Usage: $0 source.mdb"
exit 1
fi
As before, this block ensures someone provided a database file to work with. Without this check, the script might try to process a non-existent file, leading to confusing errors later. The -ne operator means “not equal” - if the number of arguments isn’t exactly one, show usage instructions and exit.
Next comes our setup section:
SOURCE_DB="$1"
WORK_DIR="data_export_$(date +%Y%m%d_%H%M%S)"
TABLES_FILE="$WORK_DIR/tables.txt"
LOG_FILE="$WORK_DIR/export.log"
Here we’re creating variables to hold important file paths. Notice how we include a timestamp in the working directory name - this prevents overwriting previous exports and provides an audit trail. The $1 represents the database file name provided on the command line.
The directory creation follows:
mkdir -p "$WORK_DIR"
This creates a fresh directory for our exported files. The -p flag means “create parent directories if needed” and prevents errors if the directory already exists. This seemingly simple step prevents several potential failure modes.
Now we get to the core of our export process:
echo "Identifying tables..."
mdb-tables "$SOURCE_DB" > "$TABLES_FILE"
This command asks mdbtools to list all tables in the database. We save this list to a file because we’ll need it for the next step. Having this separate file also lets us check what tables existed in the original database if questions arise later.
Finally, we reach the main export loop:
while read -r table; do
echo "Exporting $table..."
mdb-export "$SOURCE_DB" "$table" > "$WORK_DIR/${table}.csv" 2>> "$LOG_FILE"
done < "$TABLES_FILE"
This section reads our table list one line at a time and exports each table to a separate CSV file. The 2>> operator redirects any error messages to our log file, appending rather than overwriting. This gives us a complete record of any problems encountered during export.
After running this script, you’ll have a directory containing:
- A CSV file for each table in your database
- A list of all tables that existed in the original database
- A log file noting any errors or warnings during export
But our work isn’t done. These CSV files need careful examination before we try importing them into MySQL. Common issues to watch for include:
Character encoding problems might show up as strange symbols in text fields. Date formats might need adjustment to match MySQL’s expectations. Number formatting - especially for decimal values - needs verification. Some fields might contain embedded commas or quotes that could confuse CSV parsing.
You’ll want to examine a sample of records from each table, paying special attention to the following (a quick spot-check sketch follows the list):
- Text fields that might contain special characters
- Date/time values
- Decimal numbers
- Fields that commonly contain unusual values in your data
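Here is a quick, hedged spot-check sketch; the directory and file names are hypothetical, and these commands only hint at problems rather than prove correctness:
# Rough sanity checks on one exported CSV
file output_dir/Orders.csv                      # does the encoding look sane (e.g., UTF-8)?
head -n 5 output_dir/Orders.csv                 # eyeball the header and first few rows
awk -F',' 'NR > 1 { print NF }' output_dir/Orders.csv | sort -n | uniq -c
# a spread of field counts above hints at embedded commas or newlines
grep -c '""' output_dir/Orders.csv              # rows containing escaped quotes deserve a closer look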
Remember: this export script is just the first step in your data migration journey. The CSV files it produces provide a snapshot of your data in a format you can examine and modify if needed. Take time to verify these files thoroughly before moving on to the import phase.
Validating Data During Access to MySQL Migration
Let’s say that manufacturing company mentioned earlier went forward with their migration. Later, they discovered thousands of customer records had vanished during their database migration. Orders disappeared. Customer histories evaporated. What seemed like a successful conversion had actually corrupted critical business data.
This nightmare scenario happens more often than anyone would like to admit. Database migrations fail silently. IDs get accidentally regenerated. Customer names acquire strange characters. Relationships between tables break. Yet the database appears to work normally - until someone needs that specific data that didn’t quite make it across.
That’s why thorough data validation stands as the most critical step in any database migration. It’s not enough to simply move the data - we must verify every record arrived intact and maintains its proper relationships. This validation process requires both automated checks and careful human oversight.
Think of data validation like quality control in manufacturing. Just as a factory tests sample products throughout production, we need to test our migrated data at every step. We check not just that records exist, but that they contain the right information in the right format with the right connections to other data.
Let’s look at a basic validation script that helps catch common migration issues:
#!/bin/bash
# Data validation script for Access to MySQL migration
# Usage: ./validate-data.sh source.mdb target_mysql_db
if [ "$#" -ne 2 ]; then
echo "Usage: $0 source.mdb target_mysql_db"
exit 1
fi
SOURCE_DB="$1"
TARGET_DB="$2"
WORK_DIR="validation_$(date +%Y%m%d_%H%M%S)"
REPORT_FILE="$WORK_DIR/validation_report.txt"
# Create working directory
mkdir -p "$WORK_DIR"
# Get list of tables
echo "Identifying tables..."
mdb-tables "$SOURCE_DB" > "$WORK_DIR/tables.txt"
# Check each table
while read -r table; do
echo "Validating $table..."
# Compare record counts
# Count data rows, skipping the CSV header line
access_count=$(mdb-export "$SOURCE_DB" "$table" | tail -n +2 | wc -l)
mysql_count=$(mysql -N -e "SELECT COUNT(*) FROM $table" "$TARGET_DB")
echo "Table: $table" >> "$REPORT_FILE"
echo "Access records: $access_count" >> "$REPORT_FILE"
echo "MySQL records: $mysql_count" >> "$REPORT_FILE"
echo "-------------------" >> "$REPORT_FILE"
# Sample record comparison
echo "Sample records from $table:" >> "$REPORT_FILE"
mdb-export "$SOURCE_DB" "$table" | head -n 5 >> "$REPORT_FILE"
echo "MySQL samples:" >> "$REPORT_FILE"
mysql -N -e "SELECT * FROM $table LIMIT 5" "$TARGET_DB" >> "$REPORT_FILE"
echo "===================" >> "$REPORT_FILE"
done < "$WORK_DIR/tables.txt"
echo "Validation complete. Check $REPORT_FILE for results."
Let’s break down this script to understand exactly what it does and why each step matters.
The script begins with basic error checking - ensuring someone provided both a source Access database and target MySQL database name. Without both pieces, validation becomes impossible. The script uses clear error messages to help users understand what went wrong:
if [ "$#" -ne 2 ]; then
echo "Usage: $0 source.mdb target_mysql_db"
exit 1
fi
Next comes setup. The script creates a uniquely named working directory using the current timestamp. This prevents overwriting previous validation results and provides an audit trail. All validation outputs go to this directory:
SOURCE_DB="$1"
TARGET_DB="$2"
WORK_DIR="validation_$(date +%Y%m%d_%H%M%S)"
REPORT_FILE="$WORK_DIR/validation_report.txt"
mkdir -p "$WORK_DIR"
The actual validation process starts by identifying all tables in the Access database. We save this list to a file for reference:
echo "Identifying tables..."
mdb-tables "$SOURCE_DB" > "$WORK_DIR/tables.txt"
The heart of the validation occurs in the main loop. For each table, we:
1. Count records in both databases
2. Compare these counts
3. Extract sample records from both systems
4. Save all this information to our report file
Here’s our code to do just that:
while read -r table; do
echo "Validating $table..."
access_count=$(mdb-export "$SOURCE_DB" "$table" | tail -n +2 | wc -l)
mysql_count=$(mysql -N -e "SELECT COUNT(*) FROM $table" "$TARGET_DB")
The record counting deserves special attention. For the Access database, we export to CSV and count the data lines, skipping the header row so the figure matches the number of records. For MySQL, we use a direct COUNT(*) query. If these numbers don’t match, we know immediately something went wrong during migration.
The script then saves sample records from both databases:
echo "Sample records from $table:" >> "$REPORT_FILE"
mdb-export "$SOURCE_DB" "$table" | head -n 5 >> "$REPORT_FILE"
echo "MySQL samples:" >> "$REPORT_FILE"
mysql -N -e "SELECT * FROM $table LIMIT 5" "$TARGET_DB" >> "$REPORT_FILE"
These samples prove invaluable for spotting subtle conversion issues like character encoding problems or decimal errors.
While this script provides a good starting point, thorough validation requires additional manual checks:
- Review relationships between tables
- Verify complex data types converted correctly
- Check business logic still works
- Test critical queries and reports
- Validate security settings and permissions
- Ensure performance meets requirements
Remember: automated validation catches some classes of problems, but human review remains essential - it’s the only way to spot the more subtle issues that could impact business operations. Take time to thoroughly examine the validation results before considering your migration complete.
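For example, the first manual check above - relationships between tables - can be spot-checked with a query that looks for orphaned rows. A sketch with hypothetical table and column names:
-- Orders pointing at a customer that no longer exists indicate a broken relationship
SELECT COUNT(*) AS orphaned_orders
FROM orders o
LEFT JOIN customers c ON c.customer_id = o.customer_id
WHERE c.customer_id IS NULL;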
Common pitfalls to watch for include:
- Truncated text fields
- Rounded decimal values
- Corrupted special characters
- Broken relationships between tables
- Missing indexes or constraints
- Changed date formats
- Missing tables
The validation process might feel tedious, but it prevents costly problems down the road. Every hour spent validating data saves many more hours fixing issues after they’ve impacted business operations. Think of it as insurance - a small investment that protects against potentially massive losses.
Finally, keep your validation results. They provide valuable documentation of the migration process and can help troubleshoot any issues that surface later. Consider them part of your system documentation, proof that the migration succeeded and the data arrived intact in its new home.
The Production Migration: Moving from Access to MySQL
The final step of database migration - moving to production - carries the highest stakes. Unlike testing, where mistakes can be fixed quietly, production migration happens in full view of users who need their data working perfectly. There’s no room for error.
Think of production migration like moving a business to a new building. You can plan extensively, but eventually you must actually move everything while keeping operations running. The process requires careful orchestration, clear communication, and backup plans for when things go wrong.
Let’s examine a script that handles this critical phase:
#!/bin/bash
# Production migration script for Access to MySQL
# Usage: ./migrate-production.sh source.mdb target_mysql_db
if [ "$#" -ne 2 ]; then
echo "Usage: $0 source.mdb target_mysql_db"
exit 1
fi
SOURCE_DB="$1"
TARGET_DB="$2"
WORK_DIR="production_migration_$(date +%Y%m%d_%H%M%S)"
LOG_FILE="$WORK_DIR/migration.log"
BACKUP_DIR="$WORK_DIR/backup"
# Create working directories
mkdir -p "$WORK_DIR"
mkdir -p "$BACKUP_DIR"
# Log start time
echo "Migration started at $(date)" >> "$LOG_FILE"
# Backup existing MySQL database
echo "Creating backup of current MySQL database..."
mysqldump "$TARGET_DB" > "$BACKUP_DIR/${TARGET_DB}_backup.sql" 2>> "$LOG_FILE"
# Get list of tables in order of dependencies
echo "Analyzing table dependencies..."
mdb-tables -1 "$SOURCE_DB" | sort > "$WORK_DIR/tables_ordered.txt"
# Export and import each table
while read -r table; do
echo "Processing $table..."
# Export to CSV
echo "Exporting $table to CSV..."
mdb-export "$SOURCE_DB" "$table" > "$WORK_DIR/${table}.csv" 2>> "$LOG_FILE"
# Clear existing data
echo "Clearing existing data from $table..."
mysql -e "DELETE FROM $table" "$TARGET_DB" 2>> "$LOG_FILE"
# Import new data
echo "Importing new data into $table..."
mysql -e "LOAD DATA INFILE '$WORK_DIR/${table}.csv'
INTO TABLE $table
FIELDS TERMINATED BY ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES" "$TARGET_DB" 2>> "$LOG_FILE"
# Verify record count
# Count data rows (the CSV's first line is a header, so subtract one)
orig_count=$(( $(wc -l < "$WORK_DIR/${table}.csv") - 1 ))
new_count=$(mysql -N -e "SELECT COUNT(*) FROM $table" "$TARGET_DB")
echo "Table: $table" >> "$LOG_FILE"
echo "Original records: $orig_count" >> "$LOG_FILE"
echo "Imported records: $new_count" >> "$LOG_FILE"
echo "-------------------" >> "$LOG_FILE"
done < "$WORK_DIR/tables_ordered.txt"
# Log completion
echo "Migration completed at $(date)" >> "$LOG_FILE"
echo "Migration complete. Check $LOG_FILE for details."
Let’s break down this script section by section to understand exactly what it does and why each step matters.
First, the script ensures proper usage:
if [ "$#" -ne 2 ]; then
echo "Usage: $0 source.mdb target_mysql_db"
exit 1
fi
Next comes the setup phase:
SOURCE_DB="$1"
TARGET_DB="$2"
WORK_DIR="production_migration_$(date +%Y%m%d_%H%M%S)"
LOG_FILE="$WORK_DIR/migration.log"
BACKUP_DIR="$WORK_DIR/backup"
mkdir -p "$WORK_DIR"
mkdir -p "$BACKUP_DIR"
As before, this section creates a uniquely named working directory using the current timestamp. This prevents any conflict with previous migration attempts and provides a clear audit trail. The script also creates a separate backup directory - a crucial safety net if something goes wrong.
The backup phase follows:
echo "Creating backup of current MySQL database..."
mysqldump "$TARGET_DB" > "$BACKUP_DIR/${TARGET_DB}_backup.sql" 2>> "$LOG_FILE"
This critical step saves the current state of the MySQL database before making any changes. If problems arise during migration, this backup allows us to restore the system to its previous state. You should (almost) never skip this step, no matter how confident you feel about the migration.
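If you do need to roll back, restoring from that dump is a single command, using the same paths the script creates:
mysql "$TARGET_DB" < "$BACKUP_DIR/${TARGET_DB}_backup.sql"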
The table analysis section comes next:
echo "Analyzing table dependencies..."
mdb-tables -1 "$SOURCE_DB" | sort > "$WORK_DIR/tables_ordered.txt"
This command lists all tables in the Access database. The -1 option puts each table name on its own line, making the output easier to process. While this simple sort doesn’t handle complex dependencies, it provides a consistent order for processing tables.
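One caveat worth noting: if your MySQL schema defines foreign keys, a purely alphabetical order can make the later DELETE and LOAD steps fail. A hedged workaround is to disable foreign key checks for the session - and because SET FOREIGN_KEY_CHECKS only lasts for one connection, the statements have to share a single mysql invocation, for example via a here-document:
# Sketch: delete and reload one table in a single session with FK checks off
mysql --local-infile=1 "$TARGET_DB" 2>> "$LOG_FILE" <<SQL
SET FOREIGN_KEY_CHECKS = 0;
DELETE FROM $table;
LOAD DATA LOCAL INFILE '$WORK_DIR/${table}.csv'
INTO TABLE $table
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;
SQL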
The main migration loop forms the heart of the script:
while read -r table; do
echo "Processing $table..."
# Export to CSV
echo "Exporting $table to CSV..."
mdb-export "$SOURCE_DB" "$table" > "$WORK_DIR/${table}.csv" 2>> "$LOG_FILE"
For each table, the script first exports data to CSV format. This intermediate step allows us to verify the data before import and provides a backup of the original data.
The data import section follows:
# Clear existing data
echo "Clearing existing data from $table..."
mysql -e "DELETE FROM $table" "$TARGET_DB" 2>> "$LOG_FILE"
# Import new data
echo "Importing new data into $table..."
mysql -e "LOAD DATA INFILE '$WORK_DIR/${table}.csv'
INTO TABLE $table
FIELDS TERMINATED BY ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES" "$TARGET_DB" 2>> "$LOG_FILE"
This block handles the actual data transfer. It first clears any existing data from the target table, then loads the new data from our CSV file. The LOAD DATA INFILE command provides the fastest way to bulk load data into MySQL. Note that depending on your server's secure_file_priv setting, you may need LOAD DATA LOCAL INFILE instead (and a client started with --local-infile enabled).

Finally, the verification step:
# Verify record count
orig_count=$(( $(wc -l < "$WORK_DIR/${table}.csv") - 1 ))
new_count=$(mysql -N -e "SELECT COUNT(*) FROM $table" "$TARGET_DB")
echo "Table: $table" >> "$LOG_FILE"
echo "Original records: $orig_count" >> "$LOG_FILE"
echo "Imported records: $new_count" >> "$LOG_FILE"
echo "-------------------" >> "$LOG_FILE"```
This section counts records in both the source CSV (excluding its header line) and the target table, logging the results. This immediate verification helps catch any data loss during import.
The script concludes with completion logging:
echo "Migration completed at $(date)" >> "$LOG_FILE"
echo "Migration complete. Check $LOG_FILE for details."
These final messages provide a clear record of when the migration finished and remind users to check the log file for details.
While this script handles the technical aspects of migration, successful production migration requires more than just code. Consider these crucial factors:
Communication: Users need clear information about migration timing and any required downtime. Set expectations early and provide regular updates during the process.
Timing: Choose a migration window that minimizes business impact. Late nights or weekends often work best, but ensure support staff will be available if needed.
Testing: Run through the entire migration process multiple times in a test environment. Time these test runs to estimate how long production migration will take.
Backup: Beyond the script’s database backup, ensure you have backups of everything related to the database - including any applications that use it.
Validation: Plan thorough testing immediately after migration. Have key users ready to verify critical functionality works correctly.
Rollback Plan: Despite best efforts, things can go wrong. Have a detailed plan for reverting to the old system if necessary.
Remember: production migration represents the culmination of your conversion project. Take time to plan carefully, test thoroughly, and prepare for contingencies. The effort invested in preparation pays off in a smooth, successful migration that keeps your business running without disruption.
Finally, note that in this article we’ve covered only the data side of Microsoft Access - it’s also an application generator, and migrating that part of a Microsoft Access application will be covered in a future article.