I've got an array declared as DataPoint[] file = new DataPoint[2592000]. The array is filled with timestamps and random values; creating them takes about 2 s. In another function, prepareData(), I prepare 240 values for a second array, TempBuffer.
In prepareData() I search the file array for a matching value. If none is found, I take the timestamp and set the value to 0; otherwise I take the found value together with the same timestamp.
The function looks like this:
public void prepareData()
{
    stopWatch.Reset();
    stopWatch.Start();

    Int32 unixTimestamp = (Int32)(DateTime.UtcNow.Subtract(new DateTime(1970, 1, 1))).TotalSeconds;

    // Walk back 240 seconds from "now"; each step scans the whole file array.
    for (double i = unixTimestamp; unixTimestamp - 240 < i; i--)
    {
        if (!Array.Exists(file, element => element.XValue == i))
        {
            // No match: shift the buffer and append a zero value.
            TempBuffer = TempBuffer.Skip(1)
                .Concat(new DataPoint[] { new DataPoint(UnixTODateTime(i).ToOADate(), 0) })
                .ToArray();
        }
        else
        {
            // Match: shift the buffer and append the found Y value with the same timestamp.
            DataPoint point = Array.Find(file, element => element.XValue == i);
            TempBuffer = TempBuffer.Skip(1)
                .Concat(new DataPoint[] { new DataPoint(UnixTODateTime(i).ToOADate(), point.YValues) })
                .ToArray();
        }
    }

    stopWatch.Stop();
    TimeSpan ts = stopWatch.Elapsed;
}
Now the problem: with this amount of data in the file array (2'592'000 points) the function needs about 40 seconds! With smaller amounts like 10'000 it works fine and fast, but as soon as I set the file size to my preferred 2'592'000 points, the CPU is pushed to 99% and the function takes far too long.
TempBuffer Sample Value:
X = Unix timestamp converted to DateTime, then DateTime converted to OADate
{X=43285.611087963, Y=23}
File Sample Value:
X = Unixtimestamp
{X=1530698090, Y=24}
It's important that the TempBuffer values are converted to OADate, since the data inside the TempBuffer array is displayed in an MSChart.
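For reference, the conversion the buffer relies on can be sketched like this. UnixTODateTime is the asker's own helper, so its implementation below is an assumption (seconds since the Unix epoch interpreted as UTC); ToOADate is the standard .NET method.

```csharp
using System;

public class OADateDemo
{
    // Assumed implementation of the question's UnixTODateTime helper:
    // seconds since 1970-01-01 00:00:00 UTC, converted to a UTC DateTime.
    public static DateTime UnixTODateTime(double unixTimestamp)
    {
        return new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)
            .AddSeconds(unixTimestamp);
    }

    public static void Main()
    {
        double unix = 1530698090;  // sample X value from the file array
        double oaDate = UnixTODateTime(unix).ToOADate();
        Console.WriteLine(oaDate); // an OADate in the 43285.x range, as in the sample
    }
}
```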
Is there a way to improve my code so I've got better performance?
Convert TempBuffer to a list before your loop. Since you do a lot of mutations, they would be faster on a list than having to create a new array on every iteration.
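The bigger win, though, is avoiding the 240 linear scans of the 2'592'000-element array (Array.Exists and Array.Find are each O(n)). Building a Dictionary keyed on XValue once turns each lookup into O(1). The sketch below assumes simplified stand-in types: the real mschart DataPoint exposes YValues as an array rather than the single YValue used here, and it assumes the file timestamps are unique (ToDictionary throws on duplicate keys). It also fills the buffer oldest-to-newest in one pass instead of the Skip/Concat shuffle.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified stand-in for mschart's DataPoint (which uses a YValues array).
public class DataPoint
{
    public double XValue;
    public double YValue;
    public DataPoint(double x, double y) { XValue = x; YValue = y; }
}

public class PrepareDataDemo
{
    // Assumed equivalent of the question's UnixTODateTime helper.
    public static DateTime UnixTODateTime(double unixTimestamp) =>
        new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddSeconds(unixTimestamp);

    // Build the index once, right after filling `file`; reuse it on every call.
    // This is a single O(n) pass instead of 240 O(n) scans per prepareData() call.
    public static Dictionary<double, DataPoint> BuildIndex(DataPoint[] file) =>
        file.ToDictionary(p => p.XValue);

    public static DataPoint[] PrepareData(Dictionary<double, DataPoint> index, int unixTimestamp)
    {
        var buffer = new List<DataPoint>(240);
        // Oldest to newest, so the buffer is built in display order in one pass.
        for (double i = unixTimestamp - 239; i <= unixTimestamp; i++)
        {
            // O(1) lookup; fall back to 0 when no sample exists for this second.
            double y = index.TryGetValue(i, out var point) ? point.YValue : 0;
            buffer.Add(new DataPoint(UnixTODateTime(i).ToOADate(), y));
        }
        return buffer.ToArray();
    }
}
```

With this shape, the expensive part (building the dictionary) happens once alongside the ~2 s array creation; each prepareData() call then does only 240 constant-time lookups instead of scanning millions of elements per iteration.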