Handling Large Data Files and Streamlining Database Synchronization in Xano

The meeting among the State Changers focused on synchronizing a large data file (around 77 MB, roughly half a million records) with a database. The main difficulty was that the file was too large to process in one pass, which caused memory errors. The State Changers decided to use Convert API to convert the data type and unpack the file, and the result was then loaded into Xano.
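The summary does not show the exact Convert API request that was used. Purely as an illustration of the unpack-then-load step, here is a minimal Python sketch; the endpoint pattern, parameter names, placeholder secret, and XLSX-to-CSV formats are assumptions to verify against ConvertAPI's documentation, not details from the meeting:

```python
# Hedged sketch only: requesting a file conversion via Convert API before
# loading the result into Xano. The URL pattern, parameter names, and
# source/target formats are assumptions, not details from the meeting.
import requests

CONVERTAPI_SECRET = "YOUR_SECRET"   # placeholder credential
SOURCE_FILE = "export.xlsx"         # assumed source format

with open(SOURCE_FILE, "rb") as handle:
    response = requests.post(
        f"https://v2.convertapi.com/convert/xlsx/to/csv?Secret={CONVERTAPI_SECRET}",
        files={"File": handle},
        timeout=300,
    )
response.raise_for_status()

# Inspect whatever the service returns; the converted CSV is what gets
# split into batches and pushed to Xano in the next step.
print(response.json())
```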


The State Changers initially tried Xano's built-in CSV decoder on the large text file, but it ran out of memory. The fallback was to break the task down manually: split the text file line by line and pull the header fields out of the first line. Around 4,000 records in they hit data corruption, which forced a change of strategy.

The suggested approach was to create an API endpoint in Xano that accepts a subset of the data and processes and stores it on its own. Because each new API call starts from zero memory, dividing the work this way keeps memory from piling up. The plan was to consolidate a subset of records, send it to the endpoint for processing and storage, then clear the subset variable to avoid memory overflow. The subset (or "batch") size can be adjusted to tune performance. The remaining detail to finalize was how to iterate over the rows and match each row's values to the header keys before storing them in the database.

The main takeaway of the meeting was a robust strategy for handling large data files efficiently: break them into manageable subsets to reduce memory load and work around data corruption. Key technology mentioned: Convert API, Xano, databases, CSV decoder, memory problems, API endpoints, data synchronization.
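The line-by-line split and header extraction described above happen inside Xano's function stack, which is configured visually rather than written as code. As a language-neutral illustration of the same idea (the file name, delimiter, and naive split are assumptions, not details from the meeting), the streaming read-and-map step looks roughly like this in Python:

```python
# Illustration only: split a large export line by line, treat the first line
# as the header, and map each subsequent row onto those header keys.
# Streaming one line at a time keeps memory use flat; the plain comma split
# mirrors the manual approach discussed and ignores quoted delimiters.

def rows_as_records(path: str, delimiter: str = ","):
    """Yield one dict per data row, keyed by the header fields."""
    with open(path, encoding="utf-8") as handle:
        header = next(handle).rstrip("\n").split(delimiter)
        for line in handle:
            values = line.rstrip("\n").split(delimiter)
            # Pad short (possibly corrupt) rows with None so they surface
            # downstream instead of crashing the loop.
            yield dict(zip(header, values + [None] * (len(header) - len(values))))

# Hypothetical usage:
# for record in rows_as_records("large_export.csv"):
#     print(record)
```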
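For the batching itself, the plan was to accumulate a subset of records, send it to a Xano API endpoint for processing and storage, and then clear the subset variable so memory never accumulates. A minimal sketch of that loop, assuming a hypothetical Xano endpoint URL and a batch size of 500 (both would need tuning against the real workspace), might be:

```python
# Sketch of the batching loop discussed in the meeting: build up a subset of
# records, POST it to a Xano endpoint that stores the rows, then clear the
# subset to avoid memory pile-up. The endpoint URL and batch size are
# placeholders, not values from the source.
import requests

XANO_ENDPOINT = "https://your-workspace.xano.io/api:example/import_batch"  # hypothetical
BATCH_SIZE = 500  # tune for performance vs. request size

def push_in_batches(records, endpoint: str = XANO_ENDPOINT, batch_size: int = BATCH_SIZE):
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) >= batch_size:
            requests.post(endpoint, json={"rows": batch}, timeout=60).raise_for_status()
            batch = []  # clear the subset variable so memory stays flat
    if batch:  # flush the final partial batch
        requests.post(endpoint, json={"rows": batch}, timeout=60).raise_for_status()

# Hypothetical usage, combined with the reader sketched above:
# push_in_batches(rows_as_records("large_export.csv"))
```

Because every call to the endpoint starts from zero memory on the Xano side, the function stack only ever holds one batch of rows at a time.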


(Source: Office Hours 3/6)

State Change Members Can View The Video Here

