
I have an application that was developed in 2001. It has a table named Locations that is used by my application and by some of my business partners to insert, update, and delete data.

This table has around 1 million rows. Performance-wise it's working fine; we have indexes and use other techniques to get the data.

Now I want to implement full-text search on it. SQL Server full-text search is not fast enough, so I decided to use Elasticsearch. For this I need to ingest data from SQL Server into Elasticsearch, which I can do using Logstash, but I'm confused about the following points:

  1. How will I update the Elasticsearch index if a new record is added?
  2. How will I update the Elasticsearch index if a record is updated?
  3. How will I update the Elasticsearch index if a record is deleted?

If there were a single source updating the Locations table I could reason about it, but there are many clients that can update it. What should I do in this case? The kind of pipeline I have in mind is sketched below.
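For context, the Logstash JDBC pipeline I have in mind looks roughly like this. This is only a sketch: the connection details, the ModifiedAt last-modified column, the Id primary key, the Name column, and the locations index name are placeholders for whatever the real schema uses.

    input {
      jdbc {
        jdbc_driver_library => "/path/to/mssql-jdbc.jar"
        jdbc_driver_class => "com.microsoft.sqlserver.jdbc.SQLServerDriver"
        jdbc_connection_string => "jdbc:sqlserver://localhost:1433;databaseName=MyDb"
        jdbc_user => "logstash"
        jdbc_password => "secret"
        # Poll every minute; :sql_last_value is persisted between runs.
        schedule => "* * * * *"
        statement => "SELECT Id, Name, ModifiedAt FROM dbo.Locations WHERE ModifiedAt > :sql_last_value ORDER BY ModifiedAt"
        use_column_value => true
        tracking_column => "modifiedat"      # Logstash lowercases column names
        tracking_column_type => "timestamp"
      }
    }

    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]
        index => "locations"
        document_id => "%{id}"   # reuse the primary key so updates overwrite the same document
      }
    }

As far as I understand, this covers points 1 and 2 no matter which client wrote the row, provided every client maintains the last-modified column, because the pipeline polls the table itself. Point 3 is the part I don't see: the JDBC input only sees rows the query returns, so hard deletes never reach Elasticsearch unless I keep a soft-delete flag (e.g. a hypothetical IsDeleted column) or track deletions on the SQL Server side.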

  • You can keep track of the changes in your Locations table by some means and sync the data with Elasticsearch. For example, stackoverflow.com/questions/45708928/… is a similar question. When we did this, we used very complicated hand-written code that used change tracking to find the changes; your case is quite simple since you only have one table (a sketch of the change-tracking approach follows these comments). Commented May 13, 2023 at 15:55
  • Btw, at 1 million rows, full-text search should be fast enough; otherwise you're doing something wrong. Commented May 13, 2023 at 15:57
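For anyone taking the change-tracking route from the first comment: SQL Server's built-in change tracking records the primary keys of inserted, updated, and deleted rows no matter which client made the change, so an external sync job can replay them into Elasticsearch. A minimal T-SQL sketch, assuming the database is named MyDb and Locations has a primary key column Id and a Name column (adjust to the real schema):

    -- 1. Enable change tracking at the database level.
    ALTER DATABASE MyDb
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

    -- 2. Enable it for the table (requires a primary key).
    ALTER TABLE dbo.Locations ENABLE CHANGE_TRACKING;

    -- 3. On each sync run, read everything that changed since the last run.
    --    SYS_CHANGE_OPERATION is 'I', 'U' or 'D', so deletes are visible too.
    DECLARE @last_sync_version bigint = 0;  -- persist this value between runs
    SELECT ct.SYS_CHANGE_OPERATION, ct.Id, l.Name
    FROM CHANGETABLE(CHANGES dbo.Locations, @last_sync_version) AS ct
    LEFT JOIN dbo.Locations AS l ON l.Id = ct.Id;

    -- 4. Store this version and pass it as @last_sync_version next time.
    SELECT CHANGE_TRACKING_CURRENT_VERSION();

A LEFT JOIN is used because deleted rows no longer exist in Locations; when the sync job sees a 'D' operation it issues a delete against the Elasticsearch index instead of an upsert.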
