
I'm using spring-data-mongodb and have a simple repository which is configured with the following configuration:

@Configuration
@EnableMongoRepositories(basePackages = "com.my.package")
@Profile("default")
public class MongoConfig extends AbstractMongoConfiguration {

    @Value("${mongo.db.uri}")
    private String mongoDbUri;
    @Value("${mongo.db.database}")
    private String mongoDbDatabaseName;

    @Override
    protected String getDatabaseName() {
        return mongoDbDatabaseName;
    }

    @Override
    public MongoClient mongoClient() {
        return new MongoClient(new MongoClientURI(mongoDbUri));
    }
}

The repository extends CrudRepository, which means I can call the saveAll() method. By default, a saveAll (bulk operation) in MongoDB stops at the first record that fails, unless the underlying insertMany/updateMany command is given "continueOnError" (or uses an unordered bulk mode). Is there any way I can configure Spring Data to always continue on error (or always do an unordered insert/update), so that saveAll will always attempt the whole bulk, even if some records fail?
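For reference, this is the behaviour at the plain driver level: the ordered/unordered choice is made per call via InsertManyOptions. A minimal sketch (database and collection names are illustrative; it assumes a MongoDB instance on localhost):

```java
import com.mongodb.MongoBulkWriteException;
import com.mongodb.MongoClient;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.InsertManyOptions;
import org.bson.Document;

import java.util.Arrays;
import java.util.List;

public class UnorderedInsertSketch {
    public static void main(String[] args) {
        try (MongoClient client = new MongoClient()) {
            MongoCollection<Document> collection =
                    client.getDatabase("mydb").getCollection("items");

            List<Document> docs = Arrays.asList(
                    new Document("_id", 1),
                    new Document("_id", 1),  // duplicate _id: this insert fails...
                    new Document("_id", 2)); // ...but this one is still attempted

            try {
                // ordered(false) is the driver-level equivalent of "continueOnError":
                // the server keeps processing the batch after an individual failure
                collection.insertMany(docs, new InsertManyOptions().ordered(false));
            } catch (MongoBulkWriteException e) {
                // failures still surface as an exception, but the non-failing
                // documents have been inserted
                e.getWriteErrors().forEach(err -> System.out.println(err.getMessage()));
            }
        }
    }
}
```

The question is essentially whether saveAll() can be made to pass that option implicitly.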

Thanks!

  • In an older version of the MongoDB driver it was possible to set this behaviour using WriteConcern.continueOnError(). This was however removed from the API. Commented Aug 14, 2017 at 11:44

1 Answer


To issue a MongoDB bulk write (with the ability to choose an unordered operation, which in turn lets the write continue even after one of the items in the batch fails) you need to use org.springframework.data.mongodb.core.BulkOperations (available from Spring Data MongoDB >= 1.9.0.RELEASE).

For example:

BulkOperations bulkOperations = mongoTemplate.bulkOps(BulkMode.UNORDERED, YourDomainObjectClass.class);
for (YourDomainObject ydo : yourDomainObjects) {
    bulkOperations.insert(ydo);
}
BulkWriteResult result = bulkOperations.execute();
// inspect the result: with BulkMode.UNORDERED every insert was attempted,
// so a shortfall here means individual documents were rejected
if (result.getInsertedCount() != yourDomainObjects.size()) {
    // handle the partial failure
}

If you are using a Spring Data version < 1.9.0.RELEASE, then you can use the native driver object DBCollection (available via MongoOperations.getCollection()) and then use standard driver calls such as:

BulkWriteOperation bulkWriteOperation = collection.initializeUnorderedBulkOperation();
for (YourDomainObject ydo : yourDomainObjects) {
    // the legacy API works on DBObjects, so each domain object must be mapped
    // first (e.g. via MongoTemplate's converter); toDBObject is a placeholder
    bulkWriteOperation.insert(toDBObject(ydo));
}
BulkWriteResult result = bulkWriteOperation.execute();
// inspect the result
if (result.getInsertedCount() != yourDomainObjects.size()) {
    // some documents were rejected; check the write errors
}

2 Comments

I'm trying to keep using the saveAll method provided by the spring-data repository and was wondering if I could configure it to always use unordered mode. With your solution, I have to implement the saveAll method myself, which is something I want to avoid. I just want to configure Spring somehow to always do an unordered bulk insert, but it might not be possible.
Yeah, that's not possible with Spring Data at the moment. I think you'll have to implement your own 'bulk aware' saveAll(), with the above answer providing two approaches to that implementation.
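A 'bulk aware' method can be added without reimplementing the whole repository, using Spring Data's custom-fragment convention. A sketch, assuming a YourDomainObject entity with a String id (the Custom/Impl naming follows Spring Data's convention; the method name insertAllUnordered is illustrative):

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.mongodb.core.BulkOperations;
import org.springframework.data.mongodb.core.BulkOperations.BulkMode;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.repository.CrudRepository;

// extra fragment interface for the bulk method
interface YourDomainObjectRepositoryCustom {
    com.mongodb.bulk.BulkWriteResult insertAllUnordered(List<YourDomainObject> entities);
}

// Spring Data picks this up automatically via the <RepositoryName>Impl naming convention
class YourDomainObjectRepositoryImpl implements YourDomainObjectRepositoryCustom {

    @Autowired
    private MongoTemplate mongoTemplate;

    @Override
    public com.mongodb.bulk.BulkWriteResult insertAllUnordered(List<YourDomainObject> entities) {
        // unordered: every entity is attempted, even if some inserts fail
        BulkOperations ops = mongoTemplate.bulkOps(BulkMode.UNORDERED, YourDomainObject.class);
        entities.forEach(ops::insert);
        return ops.execute();
    }
}

// callers keep the familiar CRUD methods and gain the bulk one
interface YourDomainObjectRepository
        extends CrudRepository<YourDomainObject, String>, YourDomainObjectRepositoryCustom {
}
```

The built-in saveAll() is left untouched; callers opt into the unordered behaviour by calling insertAllUnordered() instead.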
