Introduction
Problem description
My data has been processed, but it does not appear in the processed data. Processing has failed, yet there is no error message in the logs:
Foot filter enabled.
Device body hits scan filtering enabled.
Device head hits scan filtering enabled.
Minimum distance filtering enabled. Minimum Distance: 0.4 m.
Maximum distance filtering enabled. Maximum Distance: 30 m.
Generating pointcloud...
Processing with 8 threads!
Finished!
***************************************************************************
*** ***
*** Filtering point cloud *** 
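The log above shows that points closer than 0.4 m and farther than 30 m are dropped during filtering. As a minimal sketch of what such a distance filter does (the function names `within_range` and `filter_cloud` are hypothetical, not part of the actual pipeline):

```python
import math

# Distance band taken from the log excerpt above.
MIN_DIST = 0.4   # metres ("Minimum Distance: 0.4 m")
MAX_DIST = 30.0  # metres ("Maximum Distance: 30 m")

def within_range(point, origin=(0.0, 0.0, 0.0)):
    """Return True if the point lies inside the accepted distance band."""
    return MIN_DIST <= math.dist(point, origin) <= MAX_DIST

def filter_cloud(points):
    """Drop points that are too close (device hits) or too far (outliers)."""
    return [p for p in points if within_range(p)]

cloud = [(0.1, 0.0, 0.0),   # too close: filtered out
         (5.0, 2.0, 1.0),   # kept
         (40.0, 0.0, 0.0)]  # too far: filtered out
print(filter_cloud(cloud))  # -> [(5.0, 2.0, 1.0)]
```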
Scope
Applies to: Cloud processing
Troubleshooting procedure
Cause: The dataset is too large and exceeds the processing time limit (36 hours).
Check the logs from the cloud. Scroll to where the license and data integrity section begins and check the very last line above it.
Look for the index of the last reindexed bag (in this example, 97). Adding 1 (numbering starts from 0) gives the number of minutes recorded in the dataset.
This example violates the requirement that datasets stay under 60 minutes, which the system is not designed to support. Even if processing had not run out of time, it would most likely have run out of RAM.
...reindexed_bagfiles/bag_laser_vert_97.bag 100% 37.2 MB 00:03
***************************************************************************
*** ***
*** Verifying license and data integrity ***
*** 2022-04-21 00:55:28 ***
***************************************************************************
Check the cloud processing monitoring channel and find the processing task that failed. In the example above, it would be this entry:
If the Failed reason reads "Job attempt duration exceeded timeout", this confirms the cause described above.
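The bag-index check described above can also be scripted. The sketch below (the helper `recorded_minutes` is hypothetical) scans log lines for the reindexed bag pattern and derives the recorded duration in minutes as the highest index plus one:

```python
import re

# Matches the reindexed bag lines shown in the log excerpt above.
BAG_PATTERN = re.compile(r"reindexed_bagfiles/bag_laser_vert_(\d+)\.bag")

def recorded_minutes(log_lines):
    """Return the dataset duration in minutes, or None if no bags are found."""
    indices = [int(m.group(1))
               for line in log_lines
               for m in BAG_PATTERN.finditer(line)]
    if not indices:
        return None
    return max(indices) + 1  # one bag per minute, numbering starts at 0

log = ["...reindexed_bagfiles/bag_laser_vert_97.bag 100%   37.2 MB 00:03"]
print(recorded_minutes(log))  # -> 98, well over the 60-minute limit
```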
Solution
The solution is to split the dataset into two parts and process them individually.
If no control points were used, align the parts manually. While setting up the processing task you will have an option to do so:

For the first processing task, place the slider to start from zero and end at 55% of the entire dataset duration.
Start a new processing task, select the same dataset, and place the slider to start from 45% of the entire dataset duration and run until the end.
The 10% overlap will make it easier to align the data after processing.
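The two slider ranges can be worked out up front. A minimal sketch, using the 55%/45% endpoints recommended above (the helper `split_ranges` is hypothetical):

```python
def split_ranges(duration_minutes):
    """Return the (start, end) ranges in minutes for the two processing tasks.

    The first part covers 0-55% of the recording, the second 45-100%,
    leaving a 10% overlap that eases manual alignment afterwards.
    """
    first_end = round(duration_minutes * 0.55, 1)
    second_start = round(duration_minutes * 0.45, 1)
    return (0.0, first_end), (second_start, float(duration_minutes))

# A 98-minute dataset (last bag index 97, plus 1) would be split as:
part1, part2 = split_ranges(98)
print(part1)  # -> (0.0, 53.9)
print(part2)  # -> (44.1, 98.0)
```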