We currently run two different versions of Team Foundation Server in our environment. Our setup includes an older TFS instance alongside a newer one, and we want to consolidate timesheet reporting across both systems.
Right now we’re using a third-party timesheet solution that only connects to one TFS database at a time. Since we won’t be consolidating our TFS installations anytime soon, I’m wondering if there’s a way to modify the database queries to fetch work item data from both TFS databases simultaneously.
Has anyone tried something similar? Would creating a custom stored procedure that pulls from multiple TFS warehouse databases work, or are there potential issues I should be aware of? Any suggestions would be helpful.
Cross-version TFS database queries are a nightmare. Schema differences will drive you crazy, and you’ll waste tons of time babysitting stored procedures.
I faced this exact problem with TFS 2013 and TFS 2018 running side by side. Skip the cross-database headaches - build an automated sync that pulls timesheet data through REST APIs instead.
This approach rocks because you normalize data formats on the fly, handle different work item schemas automatically, and never touch TFS databases directly. Set it once, schedule it, done.
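A minimal sketch of what that REST pull looks like, assuming both instances expose the work item tracking REST API (TFS 2015 and later; older versions would need the SOAP/client object model instead). The collection URLs, the PAT, and the query are placeholders for illustration:

```python
import base64
import json
import urllib.request

def wiql_request(collection_url, query, pat=None, api_version="1.0"):
    """Build a WIQL POST request for one TFS collection.

    collection_url and pat are placeholders -- on-prem TFS often
    authenticates with NTLM/Kerberos instead of a Personal Access Token.
    """
    url = f"{collection_url}/_apis/wit/wiql?api-version={api_version}"
    headers = {"Content-Type": "application/json"}
    if pat:
        token = base64.b64encode(f":{pat}".encode()).decode()
        headers["Authorization"] = f"Basic {token}"
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(url, data=body, headers=headers)

# The same query issued against each instance; urllib.request.urlopen(req)
# would return JSON listing the matching work item ids.
QUERY = ("SELECT [System.Id] FROM WorkItems "
         "WHERE [System.ChangedDate] > @Today - 7")
reqs = [
    wiql_request("http://tfs2018:8080/tfs/DefaultCollection", QUERY),
    wiql_request("http://tfs2015:8080/tfs/DefaultCollection", QUERY),
]
```

From there the scheduled job fetches the full work items for each id list, normalizes them, and pushes the merged result to the reporting side.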
Latenode makes this dead simple. Create workflows connecting both TFS instances, merge data however you want, and push everything to your reporting system. Zero custom code, zero database pain.
I’ve used it for similar multi-system setups and it handles TFS version differences like a champ. Error handling and monitoring come free.
I’ve been through this exact scenario with TFS 2010 and 2015 - don’t go with cross-database stored procedures. The warehouse schemas are completely different between versions, especially how they handle custom fields and project hierarchies. We tried this first and hit constant data type conflicts and referential integrity problems that cost us a fortune to maintain.

Here’s what worked: we built a dedicated integration service using the TFS client libraries to connect to both instances separately. The service runs scheduled jobs to pull timesheet-relevant work item data, transforms everything into a common format, then loads it into a separate reporting database for the timesheet tool.

Way better error handling and logging than database-level integration, plus zero risk of corrupting production TFS data. Performance was solid for our reporting needs and scaled perfectly when we added a third TFS instance.
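The load step of a service like that can be sketched with a stdlib-only example. The table layout and column names below are invented for illustration, not the schema we actually used; the key idea is upserting on (instance, work item id) so re-running a sync job is idempotent:

```python
import sqlite3

def load_rows(conn, rows):
    """Upsert normalized work item rows into the reporting table.

    Rows arrive as dicts already in the common format. Keying the table
    on (instance, item_id) means a sync job can safely re-run.
    """
    conn.execute("""
        CREATE TABLE IF NOT EXISTS timesheet_items (
            instance TEXT NOT NULL,
            item_id  INTEGER NOT NULL,
            title    TEXT,
            hours    REAL,
            changed  TEXT,
            PRIMARY KEY (instance, item_id)
        )""")
    conn.executemany("""
        INSERT INTO timesheet_items
        VALUES (:instance, :item_id, :title, :hours, :changed)
        ON CONFLICT(instance, item_id) DO UPDATE SET
            title = excluded.title,
            hours = excluded.hours,
            changed = excluded.changed""", rows)
    conn.commit()

# The same work item id can exist in both instances without colliding.
conn = sqlite3.connect(":memory:")
load_rows(conn, [
    {"instance": "tfs2010", "item_id": 42, "title": "Fix login",
     "hours": 3.5, "changed": "2024-05-01"},
    {"instance": "tfs2015", "item_id": 42, "title": "Fix login (new)",
     "hours": 2.0, "changed": "2024-05-02"},
])
total = conn.execute("SELECT SUM(hours) FROM timesheet_items").fetchone()[0]
```

A real reporting database would be SQL Server rather than SQLite, but the idempotent-upsert shape is the same.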
we built a simple middleware app that hits both TFS instances through their web services. much easier than touching database schemas directly - no risk of breaking anything. takes about 30 minutes to set up basic data pulling and merge it into whatever format you need for reporting.
Direct database integration between different TFS versions is a maintenance nightmare that most orgs don’t see coming. I worked on a project where we tried cross-database stored procedures between TFS 2012 and 2017 - the compatibility issues ate up way more dev time than we expected.

The warehouse database structures are completely different between versions, especially around work item linking and custom field storage. What looks like a simple query solution turns into a mess when you’re dealing with version-specific data types and relationship mappings.

We had better luck with scheduled PowerShell scripts. We pulled timesheet data using the TFS object model APIs from both instances, then merged everything into a dedicated reporting database. This killed the direct database dependencies while still giving us the consolidated view the timesheet solution needed. Performance was fine for our reporting, and maintenance became predictable instead of constantly reacting to TFS schema changes.
I faced a similar challenge a while back when we were running TFS 2015 alongside another version. While cross-database stored procedures are an option, I found the constant schema discrepancies difficult to manage over time, particularly with how work item types can differ significantly. Instead, we opted for a custom ETL (Extract, Transform, Load) process that allowed us to pull the necessary data from both systems into a common staging area. This provided us with more flexibility for data normalization and reporting, and ultimately proved far more effective than trying to integrate directly through stored procedures.
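The transform step of an ETL like that mostly comes down to per-instance field maps, since each TFS version (and each process template) can store the "same" value under a different reference name. A hedged sketch - the `MyCompany.CompletedHours` field is hypothetical, and the maps are examples, not a definitive list:

```python
# One field map per source instance: TFS reference name -> common name.
FIELD_MAPS = {
    "tfs2010": {
        "System.Id": "item_id",
        "System.Title": "title",
        "MyCompany.CompletedHours": "hours",  # hypothetical custom field
    },
    "tfs2015": {
        "System.Id": "item_id",
        "System.Title": "title",
        "Microsoft.VSTS.Scheduling.CompletedWork": "hours",
    },
}

def normalize(instance, fields):
    """Map one raw work item (a fields dict from TFS) into the common
    staging format, tagging it with its source instance."""
    row = {"instance": instance}
    for source_name, common_name in FIELD_MAPS[instance].items():
        row[common_name] = fields.get(source_name)
    return row

# Two raw items, one from each instance, landing in one staging shape.
staged = [
    normalize("tfs2010", {"System.Id": 7, "System.Title": "Report bug",
                          "MyCompany.CompletedHours": 1.5}),
    normalize("tfs2015", {"System.Id": 7, "System.Title": "Report bug",
                          "Microsoft.VSTS.Scheduling.CompletedWork": 2.5}),
]
```

Keeping the maps as plain data means a new TFS instance or a renamed custom field is a config change, not a code change.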