It's hard to give a definitive answer, as so much information is missing, and a lot also depends on the data the other UI screens need to fill or that other users may have open. You are going to run into issues when performing aggregates on the same table that you serve row data from, as you can end up in a deadlock, even with yourself. I'm going to give some basics that go through my mind; if you already know this, no harm done, just skip over it.
1 Transaction scope:
Say you have a transaction summing over the charges and payments to get a balance, and you would also like to return the last 10 charges as well as the last 10 payments. We all know there is no guarantee that charges and payments sum up neatly; you will run into partial payments.
Then users will use an invoice reference number in the payments, so you need to book a payment against that reference even if there are older charges that would incur late fees (this may actually be a business case for some companies, as they charge more).
You may not be able to guarantee that the table is partitioned in a way that supports transactional isolation at that level of detail, so you will likely need to set your connections to allow a snapshot isolation level (this hits your tempdb with row-version data, so size it accordingly), as you have little or no control over the T-SQL that EF generates. It helps to think of the data you are using as a ViewModel class in MVC: you do not need to take the whole table into memory when you only need a limited subset, and you will have fewer issues when you update data that another user might also be updating.
public class FooViewConfiguration : EntityTypeConfiguration<FooView>
{
    public FooViewConfiguration()
    {
        // Map the entity to a database view instead of a table.
        this.HasKey(t => t.Id);
        this.ToTable("dbo.myView");
    }
    …
}
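Snapshot isolation is enabled at the database level, not from EF. A minimal sketch, assuming SQL Server and a placeholder database name `MyDb`:

```sql
-- MyDb is a hypothetical name; run this while no other sessions hold the database.
-- Readers then see committed row versions from tempdb instead of blocking writers.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

With `READ_COMMITTED_SNAPSHOT` on, EF's default read-committed connections get row versioning automatically, without changing the generated T-SQL.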
You will find that your users get optimistic concurrency exceptions (you never see them yourself, as you are the only one changing the data in your database), so your data access code needs to deal with that. Have a look at:
using (var context = new PaymentTransactionContext())
{
    var payment = context.Payments.Find(1);
    payment.PayedByName = "client Name";
    payment.PaymentDate = DateTime.UtcNow;

    bool saveFailed;
    do
    {
        saveFailed = false;
        try
        {
            context.SaveChanges();
        }
        catch (DbUpdateConcurrencyException ex)
        {
            saveFailed = true;
            // Refresh the entity that failed to save with the current values from the store
            ex.Entries.Single().Reload();
        }
    } while (saveFailed);
}
And only save where you actually have a change:
using (var context = new MyContext())
{
    context.ChangeTracker.TrackGraph(customer, e =>
    {
        e.Entry.State = EntityState.Unchanged;
        if (e.Entry.Entity is Invoice)
        {
            // A key that is already set means the row exists in the database.
            e.Entry.State = e.Entry.IsKeySet
                ? EntityState.Modified
                : EntityState.Added;
        }
    });

    foreach (var entry in context.ChangeTracker.Entries())
    {
        Console.WriteLine("Entity: {0}, State: {1}",
            entry.Entity.GetType().Name, entry.State);
    }

    context.SaveChanges();
}
You limit this by using an (indexed) view and controlling the locking with T-SQL. You can update data through a view either directly, if only one table is involved, or by using "instead of" triggers when the view joins several tables.
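A sketch of such an "instead of" trigger; `vCustomerBalance`, `Payments`, and the column names are all hypothetical and would need to match your schema:

```sql
-- Routes an UPDATE against a multi-table view to the one base table it may touch.
CREATE TRIGGER dbo.trg_vCustomerBalance_Update
ON dbo.vCustomerBalance
INSTEAD OF UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Only the payment columns are writable through this view;
    -- the charge side stays read-only.
    UPDATE p
       SET p.PaymentDate = i.PaymentDate
      FROM dbo.Payments AS p
      JOIN inserted AS i ON i.PaymentId = p.PaymentId;
END;
```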
2 Indexing and locks: You might have some indexes, and you might find that they are disabled (they get disabled on ETL import errors), or that the ones you need are missing. You will then usually end up with table locks, as the DBMS has no alternative: the only options left are a table scan or an index scan (yes, both are bad).
Adding too many indexes will really slow down inserts, as the indexes need to be maintained as well. Depending on the size and number of indexes, that can get out of hand. (I once worked on a project where I had to drop all indexes, take the backup, and re-create the indexes afterwards in order to meet the recovery time objectives.)
Indexes, and covering indexes in particular, also widen the scope of the data, as they need to be locked as well.
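Checking for disabled indexes is straightforward; a sketch, where `MyTable` and `IX_MyTable_JoinKey` are placeholder names:

```sql
-- List indexes that are currently disabled (e.g. left behind by a failed ETL load).
SELECT OBJECT_NAME(object_id) AS table_name,
       name                   AS index_name
  FROM sys.indexes
 WHERE is_disabled = 1;

-- A disabled index is not used or maintained until it is rebuilt.
ALTER INDEX IX_MyTable_JoinKey ON dbo.MyTable REBUILD;
```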
What one ends up doing is designing a database that has indexes on the join fields first. You limit the scope and speed up your queries by "removing unrelated data" as early as possible: the smaller the scope, the smaller the lock, and the less disk I/O. SQL Server stops optimizing and executes a plan after a given time; it might not be the best plan, but you do not want to wait 2 seconds for it to figure that out (that is why you can pre-compile plans and attach them).

Usually what I end up doing is making two connections/contexts: one with a read-only intent and one with a read-write intent. This saves you quite a few locking issues. Note that this is what makes working with large databases so slow: you quickly end up locking too much and reading too much. The read-only connection will do most of the heavy lifting and cause little or no issues, as the intent and transaction levels help.
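A sketch of the two-context split, assuming EF6 and an Availability Group listener; `myListener` and the context names are hypothetical, and `ApplicationIntent=ReadOnly` routes the connection to a readable secondary:

```csharp
public class ReadOnlyPaymentContext : DbContext
{
    public ReadOnlyPaymentContext()
        : base("Server=myListener;Database=Payments;Integrated Security=True;ApplicationIntent=ReadOnly")
    {
        // Belt and braces: the read side never tracks or flushes changes.
        Configuration.AutoDetectChangesEnabled = false;
        Configuration.ProxyCreationEnabled = false;
    }
}

public class ReadWritePaymentContext : DbContext
{
    public ReadWritePaymentContext()
        : base("Server=myListener;Database=Payments;Integrated Security=True")
    { }
}
```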
3 Databases are for storing data, they are not OO: SQL Server does support graph tables, and you would get some kick-ass performance from them; however, EF does not support them as far as I know. So, if the database is getting bigger, I advise against mirroring the OO class design onto the SQL tables; it is not going to perform. Go towards sixth normal form and start duplicating data, or create a physical table for all 3 using an optional 1:1 relationship (a foreign key that is disabled for query plans) and a trigger for data integrity.
Also, I think that a Proposal is perhaps a ChargeTransaction with a flag marking it as accepted… could this be the case?
Try having a customer with relationships on "view model"/"unit of work" scoped data, using ICollection for all relations, and really limit what you load using the options mentioned above.
You can always mix EF and SqlCommand and use the best of both worlds whenever you need.
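For example (EF6 sketch; `Payments`, `Amount`, and `CustomerId` are placeholder names): let the database do the aggregate with raw SQL while the row data stays in EF:

```csharp
decimal balance;
using (var context = new PaymentTransactionContext())
{
    // Raw SQL for the aggregate: the server sums the rows, EF only
    // materializes a single decimal instead of the whole table.
    balance = context.Database
        .SqlQuery<decimal>(
            "SELECT COALESCE(SUM(Amount), 0) FROM dbo.Payments WHERE CustomerId = @p0",
            customerId)
        .Single();
}
```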