I’m working on a PHP script that queries multiple tables and I’m hitting a snag when processing some of them. Here’s the scenario:
$main_query = "SELECT * FROM schema_info WHERE table_name LIKE '%settings_lang'";
$result = db_execute($main_query);

while ($row = fetch_row($result)) {
    $sub_query = "SELECT setting_value FROM {$row['schema_name']}.{$table_prefix}settings WHERE setting_key='DB_VERSION'";
    try {
        $sub_result = db_execute($sub_query);
        $sub_row = fetch_row($sub_result);
        // Additional operations
    } catch (Exception $e) {
        continue;
    }
}
The issue is that certain tables trigger a ‘table not found’ error (MariaDB error 1932, “Table doesn’t exist in engine”) during the inner query, and my try-catch block never intercepts it. Instead of skipping the faulty table, the error aborts execution at the subsequent call that fetches the data.
What could be causing the error handling to break, and how can I adjust my code to bypass these faulty tables while continuing with others?
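A likely cause, assuming db_execute() wraps mysqli: before PHP 8.1, mysqli defaults to MYSQLI_REPORT_OFF, so a failed query only emits a PHP warning and returns false — nothing is thrown, and the try-catch never fires; the failure then surfaces at the next fetch call. Enabling strict reporting makes the driver throw mysqli_sql_exception, which a try-catch can actually intercept. A minimal sketch (connection credentials and table names are placeholders):

```php
<?php
// Before PHP 8.1, mysqli defaults to MYSQLI_REPORT_OFF: a failed query
// returns false plus a warning, so a surrounding try/catch never fires.
// Strict mode makes the driver throw instead.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);

$mysqli = new mysqli('localhost', 'user', 'password'); // assumed credentials

try {
    // Querying a broken/missing table now throws mysqli_sql_exception
    $result = $mysqli->query(
        "SELECT setting_value FROM some_schema.settings WHERE setting_key = 'DB_VERSION'"
    );
} catch (mysqli_sql_exception $e) {
    // Error 1932 ("Table doesn't exist in engine") lands here
    // instead of killing the script at the next fetch.
    error_log('Skipping table: ' . $e->getMessage());
}
```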
I’ve encountered similar issues when working with dynamic queries across multiple schemas. One approach that’s worked well for me is to use PDO with the error mode set to silent, then explicitly check for errors after each query. (Note that since PHP 8.0 the default error mode is PDO::ERRMODE_EXCEPTION, so you need to set ERRMODE_SILENT explicitly.)
Here’s a modified version of your code that might help:
$pdo = new PDO($dsn, $user, $password, [PDO::ATTR_ERRMODE => PDO::ERRMODE_SILENT]);

$main_query = "SELECT * FROM schema_info WHERE table_name LIKE '%settings_lang'";
$result = $pdo->query($main_query);

while ($row = $result->fetch(PDO::FETCH_ASSOC)) {
    $sub_query = "SELECT setting_value FROM {$row['schema_name']}.{$table_prefix}settings WHERE setting_key='DB_VERSION'";
    $sub_result = $pdo->query($sub_query);
    if ($sub_result === false) {
        // Log error or handle as needed
        continue;
    }
    $sub_row = $sub_result->fetch(PDO::FETCH_ASSOC);
    // Additional operations
}
This method allows you to catch and handle errors without breaking the execution flow. It’s been reliable in my experience with cross-schema queries.
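If you go this route, PDO::errorInfo() gives you the details of the failed query for logging. A short sketch (the DSN, credentials, and schema/table names are placeholders):

```php
<?php
// Sketch: inspecting why a query failed when PDO is in ERRMODE_SILENT.
$pdo = new PDO($dsn, $user, $password, [PDO::ATTR_ERRMODE => PDO::ERRMODE_SILENT]);

$sub_result = $pdo->query(
    "SELECT setting_value FROM some_schema.settings WHERE setting_key = 'DB_VERSION'"
);
if ($sub_result === false) {
    // errorInfo() returns [SQLSTATE, driver-specific code, driver-specific message]
    [$sqlstate, $code, $message] = $pdo->errorInfo();
    error_log("Skipping schema: [$sqlstate/$code] $message");
}
```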
As someone who’s dealt with similar multi-schema database operations, I can share a technique that’s worked well for me. Instead of relying solely on try-catch, I’ve found it effective to implement a custom error handler. Here’s how you could modify your approach:
set_error_handler(function ($errno, $errstr, $errfile, $errline) {
    throw new ErrorException($errstr, 0, $errno, $errfile, $errline);
});

$main_query = "SELECT * FROM schema_info WHERE table_name LIKE '%settings_lang'";
$result = db_execute($main_query);

while ($row = fetch_row($result)) {
    $sub_query = "SELECT setting_value FROM {$row['schema_name']}.{$table_prefix}settings WHERE setting_key='DB_VERSION'";
    try {
        $sub_result = db_execute($sub_query);
        $sub_row = fetch_row($sub_result);
        // Additional operations
    } catch (ErrorException $e) {
        // Log the error or handle it as needed
        error_log("Error in sub-query: " . $e->getMessage());
        continue;
    }
}

restore_error_handler();
This approach converts PHP warnings and notices into ErrorException instances you can catch alongside regular exceptions, giving you more control over error handling without disrupting the script’s flow. It’s been a game-changer for me when working with complex database operations across multiple schemas.
I’ve tackled similar challenges in my work with multi-schema databases. One approach that’s proven effective for me is to implement a custom error handler combined with a more robust query execution method. Here’s a snippet that illustrates this strategy:
function customErrorHandler($errno, $errstr, $errfile, $errline) {
    throw new ErrorException($errstr, 0, $errno, $errfile, $errline);
}
set_error_handler('customErrorHandler');

function safeQueryExecute($query) {
    try {
        $result = db_execute($query);
        return $result ?: false;
    } catch (ErrorException $e) {
        error_log("Query execution error: " . $e->getMessage());
        return false;
    }
}

$main_query = "SELECT * FROM schema_info WHERE table_name LIKE '%settings_lang'";
$result = safeQueryExecute($main_query);

if ($result) {
    while ($row = fetch_row($result)) {
        $sub_query = "SELECT setting_value FROM {$row['schema_name']}.{$table_prefix}settings WHERE setting_key='DB_VERSION'";
        $sub_result = safeQueryExecute($sub_query);
        if ($sub_result) {
            $sub_row = fetch_row($sub_result);
            // Additional operations
        }
    }
}

restore_error_handler();
This approach has consistently allowed me to handle errors gracefully without interrupting the script’s flow, even when dealing with problematic tables across multiple schemas.
I’ve dealt with similar issues in multi-schema environments. One effective approach I’ve used is to combine error handling with a more robust query execution method. Here’s a technique that’s worked well for me:
// Assumes PDO::ATTR_ERRMODE is ERRMODE_SILENT; under ERRMODE_EXCEPTION,
// prepare()/execute() throw instead of returning false.
function safeQueryExecute($pdo, $query) {
    $stmt = $pdo->prepare($query);
    if (!$stmt) {
        return false;
    }
    $result = $stmt->execute();
    return $result ? $stmt : false;
}

$main_query = "SELECT * FROM schema_info WHERE table_name LIKE '%settings_lang'";
$result = safeQueryExecute($pdo, $main_query);

if ($result) {
    while ($row = $result->fetch(PDO::FETCH_ASSOC)) {
        $sub_query = "SELECT setting_value FROM {$row['schema_name']}.{$table_prefix}settings WHERE setting_key='DB_VERSION'";
        $sub_result = safeQueryExecute($pdo, $sub_query);
        if ($sub_result) {
            $sub_row = $sub_result->fetch(PDO::FETCH_ASSOC);
            // Additional operations
        } else {
            // Log error or handle as needed
            error_log("Error executing sub-query: " . implode(', ', $pdo->errorInfo()));
        }
    }
}
This method provides better control over query execution and error handling, allowing you to gracefully skip problematic tables without disrupting the overall process.
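One more option worth considering, complementary to the error handling above: ask information_schema up front which schemas actually contain the settings table, and only query those. A sketch, assuming a PDO connection and the $table_prefix variable from the question:

```php
<?php
// Sketch: pre-filter schemas via information_schema so the sub-query
// mostly never hits a missing table. $dsn/$user/$password and
// $table_prefix are assumed placeholders.
$pdo = new PDO($dsn, $user, $password, [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);

$stmt = $pdo->prepare(
    "SELECT table_schema FROM information_schema.tables WHERE table_name = ?"
);
$stmt->execute(["{$table_prefix}settings"]);
$schemas = $stmt->fetchAll(PDO::FETCH_COLUMN);

foreach ($schemas as $schema) {
    try {
        $sub_row = $pdo->query(
            "SELECT setting_value FROM `$schema`.{$table_prefix}settings WHERE setting_key = 'DB_VERSION'"
        )->fetch(PDO::FETCH_ASSOC);
        // Additional operations
    } catch (PDOException $e) {
        // Keep this guard: error 1932 can still occur even for tables
        // listed in information_schema (orphaned table definitions).
        error_log("Skipping $schema: " . $e->getMessage());
    }
}
```

One caveat: error 1932 specifically means the table definition exists but the storage-engine data is missing, so such a table can still show up in information_schema — which is why the sketch keeps a try-catch as a second line of defense.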