I am a bit confused about measuring the run time of a function I have that gathers sensor data and stores it for me. The code runs on a Raspberry Pi Zero reading a sensor over I2C; relevant background and information can be found in an earlier question of mine: Direct reading from i2c
Anyway, previously, both from timing it manually and from timestamping with the time() functionality in the code below:
time_t rawtime;
struct tm *info;

time(&rawtime);                          /* stamp before the transfer */
info = localtime(&rawtime);
fprintf(new_file, "%s", asctime(info));  /* never pass data as a format string */

// Code goes here

time(&rawtime);                          /* stamp after the transfer */
info = localtime(&rawtime);
fprintf(new_file, "%s", asctime(info));
I have consistently been able to time the data transfer rate and get roughly the same values (1–1.5 kHz).
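For clarity, this is roughly how I derive that rate from the two timestamps (a sketch only; N_SAMPLES stands in for my loop count):

/* time() only resolves whole seconds, so this is a coarse figure */
time_t t_start, t_end;

time(&t_start);
/* ... loop reading N_SAMPLES frames here ... */
time(&t_end);

double rate_hz = N_SAMPLES / difftime(t_end, t_start);
printf("approx. sample rate: %.1f Hz\n", rate_hz);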
Now this time I tried a different approach, as can be seen below:
clock_t start, end;
double time_taken;
struct timespec t1, t2;
double elapsedTime;

// start both timers
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);  /* per-process CPU time */
start = clock();                               /* also CPU time */

read_MPU(new_file);

end = clock();
time_taken = end - start;
clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t2);

// compute and print the elapsed time in milliseconds
elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;
elapsedTime += (t2.tv_nsec - t1.tv_nsec) / 1000000.0;
printf("%.3f ms elapsed\n", elapsedTime);

time_taken = (((double)time_taken) / CLOCKS_PER_SEC) * 1000;
printf("thread took: %f milliseconds\n", time_taken);
In the snippet above I am using two methods for verification, though they both output roughly the same time: about 2500 milliseconds. Here is the code for the function being timed:
void *read_MPU(FILE *new_file){
    counter = 0;                 /* counter, buf, i, and fd are globals */
    while (counter < 10000){
        buf[0] = 0x3B;           /* first sensor data register */
        write(fd, buf, 1);       /* set the register pointer */
        read(fd, buf, 14);       /* read all 14 sensor bytes */
        for (i = 0; i < 14; i++){
            fprintf(new_file, "%.0f,", (float)buf[i]);
        }
        fprintf(new_file, "\n");
        counter++;
    }
    return NULL;                 /* declared void*, so return something */
}
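To rule out the raw calls silently failing, a checked variant of the same loop would look roughly like this (a sketch; read_MPU_checked is my name, and buf, fd, counter, and i are the same globals as above):

void *read_MPU_checked(FILE *new_file){
    for (counter = 0; counter < 10000; counter++){
        buf[0] = 0x3B;                       /* first sensor data register */
        if (write(fd, buf, 1) != 1){
            perror("i2c write");             /* register-pointer write failed */
            return NULL;
        }
        if (read(fd, buf, 14) != 14){
            perror("i2c read");              /* short or failed read */
            return NULL;
        }
        for (i = 0; i < 14; i++)
            fprintf(new_file, "%.0f,", (float)buf[i]);
        fprintf(new_file, "\n");
    }
    return NULL;
}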
Now each iteration of this function moves 17 bytes over I2C (1 address byte plus 1 register byte for the write, then 1 address byte plus 14 data bytes for the read); with an ACK bit per byte that is 153 bits. For the times I am getting here, that works out to a data rate of around 676,433 bits per second. This makes no sense, because I am running the setup on a Pi Zero and its I2C clock is set to 400 kHz. On top of the massive discrepancy between timing methods mentioned previously, the actual run time is far greater than that (around 8–9 seconds) going by the alternate methods used previously and by timing it manually.
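Running the numbers the other way (my own arithmetic, assuming an ideal bus with no START/STOP overhead or inter-frame gaps): 10,000 iterations × 153 bits = 1,530,000 bits, and at 400 kHz that is at least 1,530,000 / 400,000 ≈ 3.8 seconds of pure bus time as a lower bound, which neither the ~2.5 second measurement nor the implied 676 kbit/s rate is consistent with.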
Could someone please explain what I am missing here? I really need to understand this clearly and figure it out. The further I dive into this issue, the less sense it makes.
EDIT: My current thinking is that, since I am using a file descriptor, I am simply looping over it and reading data faster than the actual I2C transaction takes place, so I am never actually waiting for the data to update. This does not make total sense either: I am not running much else in my program, and since the function I am running (read_MPU) is the most laborious in the program, the full ~10 seconds the program takes to execute is still unaccounted for.
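If my guess is right, one way I could test it would be to poll the sensor's data-ready flag before each read. A sketch (the helper name is mine; per the MPU6050 register map, INT_STATUS is register 0x3A with DATA_RDY_INT in bit 0, and DATA_RDY_EN in INT_ENABLE, register 0x38, presumably needs to be set during init for the flag to assert):

/* Sketch: block until the MPU6050 reports fresh data.
   Uses the same fd and buf globals as read_MPU. */
static int wait_for_data_ready(void){
    do {
        buf[0] = 0x3A;                   /* INT_STATUS register */
        if (write(fd, buf, 1) != 1) return -1;
        if (read(fd, buf, 1) != 1)  return -1;
    } while (!(buf[0] & 0x01));          /* bit 0 = DATA_RDY_INT */
    return 0;                            /* fresh sample is ready */
}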
Full executing code:
int main(int argc, char **argv){
    clock_t start, end;
    double time_taken;
    int bus;

    if (argc > 1) bus = atoi(argv[1]);
    else bus = 1;

    sprintf(buf, "/dev/i2c-%d", bus);
    // Open port for reading and writing on the i2c bus
    if ((fd = open(buf, O_RDWR)) < 0){
        fprintf(stderr, "Failed to open i2c bus /dev/i2c-%d\n", bus);
        exit(1);
    }

    select_i2c_Device(fd, MPU_ADDRS, "MPU6050");
    MPU6050_init();
    FILE *new_file = create_file();

    struct timespec t1, t2;
    double elapsedTime;

    // start both timers (CPU time)
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t1);
    start = clock();

    read_MPU(new_file);

    end = clock();
    time_taken = end - start;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &t2);

    // compute and print the elapsed time in milliseconds
    elapsedTime = (t2.tv_sec - t1.tv_sec) * 1000.0;
    elapsedTime += (t2.tv_nsec - t1.tv_nsec) / 1000000.0;
    printf("%.3f ms elapsed\n", elapsedTime);

    time_taken = (((double)time_taken) / CLOCKS_PER_SEC) * 1000;
    printf("thread took: %f milliseconds\n", time_taken);

    fclose(new_file);
    return 0;
}
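For completeness, I build and run it like this (the source file name read_mpu.c is mine; older toolchains may need -lrt for clock_gettime):

gcc -o read_mpu read_mpu.c
./read_mpu 1    # optional argument selects /dev/i2c-1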