Computer Vision Development for Estimating Trunk Angles

Posted on by Menekse S. Barim, PhD, AEP; Robert G. Radwin, PhD; and Ming-Lun (Jack) Lu, PhD, CPE

 

Work-related musculoskeletal disorders have been linked to many physical job risk factors, such as forceful movements, repetitive exertions, awkward postures, and vibration. These risk factors are typically evaluated using ergonomic risk assessment methods or tools, which are predominantly self-report or observational. Self-report methods include questionnaires, checklists, and interviews. Observational methods entail rating pre-defined risk factors with ergonomics checklists during a site visit or from a video recording of the job. Observational methods are widely used by professional ergonomists and safety professionals in the field. Their accuracy, however, is affected by the subjective nature of the evaluations, which can lead to biased risk interpretations. As a consequence, inaccurate observational ergonomic risk assessments may lead to ineffective interventions for reducing job risk factors for work-related musculoskeletal disorders.

In addition, current assessment methods cannot measure risk over long periods, yet representative sampling over longer periods of time is critical for determining total risk exposure accurately. Computer vision, an artificial intelligence (AI) technology, is a promising tool for conducting risk assessment. Computer vision uses a computer instead of a human observer to identify human body posture, motion, and hand activity, and it has become popular for classifying postures in ergonomic applications. With its low cost, ease of use, and real-time posture assessment, it is likely to reduce the burden and cost associated with current ergonomic risk analysis tools.

Researchers have used computer vision to classify lifting postures using the dimensions of boxes drawn tightly around the subject, called bounding boxes [i]. Earlier studies have demonstrated lifting monitor algorithms [ii] and posture classification using computer vision-extracted features [iii]. A current study by NIOSH researchers and University of Wisconsin-Madison collaborators is exploring whether simple bounding-box dimensions obtained using computer vision can predict the sagittal trunk angle (measured from the center of the hips to the center of the shoulders), a predictor of low-back disorders [iv], [v]. This new computer vision-based risk assessment method simplifies measuring trunk angles and quantifies ergonomic risk over long periods of time in industry settings.
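To make this concrete, the sketch below shows one way per-frame bounding boxes could be pulled from a job video using OpenCV's stock HOG pedestrian detector. This is only an illustrative stand-in for the study's lifting monitor algorithm; the detector choice, the frame_bounding_boxes helper, and its parameters are assumptions, not the authors' implementation.

```python
# Illustrative only: OpenCV's built-in HOG person detector stands in for the
# study's lifting monitor algorithm; names and parameters are assumptions.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def frame_bounding_boxes(video_path):
    """Yield (frame_index, box_width, box_height) for the largest person detection per frame."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        if len(boxes) > 0:
            # Each detection is (x, y, w, h); keep the largest box as the worker.
            x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
            yield idx, int(w), int(h)
        idx += 1
    cap.release()
```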

Trunk angles are not currently part of commonly used lifting assessment tools because they are difficult to measure [vi]. The new method does not require high-precision posture measurements, so it is more tolerant of the visual conditions encountered in industrial settings.

To develop the computer vision-based method, the trunk flexion angle was modeled using a training dataset of 105 computer-generated lifting postures with different horizontal and vertical hand locations. Researchers drew a bounding box (shown in the figure) tightly around the subject for each training case, measured its height and width, and recorded the torso angle. Image B shows how the lifting monitor algorithm processes the video in image A: a rectangular bounding box encloses the subject’s body motion, including the locations of the hands and the ankles, which are identified by green asterisks [vii].
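As a rough illustration of how a bounding box's height and width could drive a trunk-angle prediction, the sketch below fits an ordinary least-squares model on such features. The feature set (an intercept plus the width-to-height ratio) and the function names are hypothetical; the study's actual model is described in the cited papers.

```python
# A minimal sketch, assuming a least-squares fit from bounding-box features to
# trunk flexion angle; this is not the study's published model.
import numpy as np

def box_features(width, height):
    """Simple features from a person bounding box: an intercept plus width/height ratio."""
    return np.array([1.0, width / height])

def fit_trunk_angle_model(boxes, angles):
    """Fit trunk flexion angle (degrees) on bounding-box features.

    boxes  -- sequence of (width, height) pairs, one per training posture
    angles -- matching reference trunk flexion angles in degrees
    """
    X = np.stack([box_features(w, h) for w, h in boxes])
    coef, *_ = np.linalg.lstsq(X, np.asarray(angles, dtype=float), rcond=None)
    return coef

def predict_trunk_angle(coef, width, height):
    """Predict trunk flexion angle (degrees) for a new bounding box."""
    return float(coef @ box_features(width, height))
```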

The model was validated using 180 lifting tasks performed by 5 subjects in a laboratory setting. The trunk flexion angles for the lifting tasks were measured by a research-grade motion capture system and used as the gold standard for the validation test. The mean absolute difference between predicted and motion-capture-measured trunk angles was 15.85º, and there was a linear relationship between predicted and measured trunk angles (R²=0.80, p<0.001). The error in measuring trunk angle was 2.52º.
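The summary statistics above (mean absolute difference, R², and p-value) can be reproduced from paired angle series with standard tooling; the helper below is a sketch of that arithmetic, not the study's analysis code, and the function name is made up for illustration.

```python
# Sketch of the validation arithmetic: mean absolute difference and the linear
# relationship between predicted and motion-capture trunk angles.
import numpy as np
from scipy import stats

def validation_summary(predicted, measured):
    """Return mean absolute difference (degrees), R^2, and p-value of the linear fit."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    mad = float(np.mean(np.abs(predicted - measured)))
    fit = stats.linregress(measured, predicted)   # slope, intercept, rvalue, pvalue, stderr
    return {"mean_abs_diff_deg": mad,
            "r_squared": fit.rvalue ** 2,
            "p_value": fit.pvalue}
```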

This study demonstrated the feasibility of predicting trunk flexion angles for lifting tasks from video recordings using computer vision algorithms, without a human observer’s input. This computer vision-based lifting risk assessment method may be used as a non-intrusive, automatic, and practical risk assessment tool in many workplace settings. Because the method can automatically collect risk data over extended periods, it may serve as an efficient way of assessing lifting risks in large-scale field studies. Ergonomists may also benefit from the continuous risk information for devising and prioritizing interventions. We plan to refine the computer algorithms to estimate other lifting risk factors, such as the trunk asymmetry angle and other variables used by the Revised NIOSH Lifting Equation.

We would like to hear from you regarding the potential applications of this computer vision-based lifting risk assessment method in your workplace.

 

Menekse S. Barim, PhD, AEP, is a Research Industrial Engineer in the NIOSH Division of Field Studies and Engineering.

Robert G. Radwin, PhD, is the Duane H. and Dorothy M. Bluemke Professor in the College of Engineering at the University of Wisconsin-Madison.

Ming-Lun (Jack) Lu, PhD, CPE, is a Research Ergonomist in the NIOSH Division of Field Studies and Engineering and Manager of the NIOSH Musculoskeletal Health Cross-Sector Program.

 

References

[i] Greene, R. L., Lu, M.-L., Barim, M. S., Wang, X., Hayden, M., Hu, Y. H., & Radwin, R. G. (2019). Estimating trunk angles during lifting using computer vision bounding boxes. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 63(1), 1128–1129.

[ii] Wang, X., Hu, Y. H., Lu, M.-L., & Radwin, R. G. (2019). The accuracy of a 2D video-based lifting monitor. Ergonomics, 62(8), 1043-1054.

[iii] Greene, R. L., Hu, Y. H., Difranco, N., Wang, X., Lu, M.-L., Bao, S., … & Radwin, R. G. (2019). Predicting sagittal plane lifting postures from image bounding box dimensions. Human Factors, 61(1), 64-77.

[iv] Marras, W. S., Lavender, S. A., Leurgans, S. E., Rajulu, S. L., Allread, S. W. G., Fathallah, F. A., & Ferguson, S. A. (1993). The role of dynamic three-dimensional trunk motion in occupationally-related low back disorders: The effects of workplace factors, trunk position, and trunk motion characteristics on risk of injury. Spine, 18(5), 617-628.

[v] Lavender, S. A., Andersson, G. B., Schipplein, O. D., & Fuentes, H. J. (2003). The effects of initial lifting height, load magnitude, and lifting speed on the peak dynamic L5/S1 moments. International Journal of Industrial Ergonomics, 31(1), 51-59.

[vi] Patrizi, A., Pennestrì, E., & Valentini, P. P. (2016). Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics. Ergonomics, 59(1), 155-162.

[vii] Greene, R. L., Lu, M.-L., Barim, M. S., et al. (2020). Estimating trunk angle kinematics during lifting using a computationally efficient computer vision method. Human Factors. https://doi.org/10.1177/0018720820958840

 


11 comments on “Computer Vision Development for Estimating Trunk Angles”


    This is nice if all that had to do with the risk of injury was angles; the amount and strength of an individual’s muscle mass has more to do with the risk of injury than any angle out there…

    Thank you for your comment on our blog. We understand that personal factors play a role in the development of musculoskeletal disorders. For ergonomic assessments in the workplace, job-related risk factors are the focus of our study because those are the factors that employers can control when designing tasks. A worker’s trunk flexion angle at work is typically dictated by the job design.

    The ultimate goal of the computer vision-based ergonomic risk assessment is to implement a working program on either a portable computer or a mobile phone. We will post an update on this blog when such an application becomes available.

    I used to work lifting commodities into industrial plastic bins perfectly arranged on 1-ton trucks. Each truck carried 6 bins in total, which we had to fill. I managed to lift nearly 20 tons on a daily basis. Those who experienced back pain or other injuries didn’t use good lifting postures or scarcely did any adequate warm-up exercise. I applaud this achievement of yours and am hoping many people will benefit from it in the near future. People in developing countries work really hard, without PPE or even guidelines to prevent injuries and so on.
    I really appreciate the fact that you are working toward the advancement of humanity by means of creating safer work environments.
    Warm regards to the team from Jalisco, Mexico.

    Thank you for your comment. Our goal is to develop practical direct-reading technologies for evaluating the risk of low back disorders resulting from manual lifting. The new computer vision technology may help managers and workers identify risks of low back problems quickly and automatically. The risk information can be used for targeting or prioritizing actions for reducing the risk of back injuries.

    I believe that two different people performing the same task could have a marked difference in trunk angles due to body mechanics. Would you agree and, if so, what does it mean for the computer model? Would the model recognize a floor-to-waist lift of an object performed two different ways: one with bent knees and the lordotic spine curve maintained vs. another with straight legs and a bent waist? Job design is the same, but the approach varies between individuals. Which one would have a higher risk, or would they both be the same risk? Very interesting work, and I see much potential for workplace job analysis using computer vision.

    Thank you for your comment. We agree that workers can show different trunk flexion angles for the same task due to different lifting techniques. Our computer vision-based posture classification technology uses a bounding box drawn tightly around the worker. The bounding box is capable of distinguishing different lifting postures such as stooping, squatting, and standing. A posture classification algorithm called Classification and Regression Tree (CART) was employed to classify the different lifting postures using the dimensions of the bounding box. This laboratory-based study showed that the model misclassified 3.33% of the lifts overall (0 of 10 squat lifts, 1 of 10 stoop lifts, and 0 of 10 standing lifts) from the side view. You may find additional information in the 2019 paper “Predicting Sagittal Plane Lifting Postures From Image Bounding Box Dimensions” in the journal Human Factors.
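For readers curious what such a classifier looks like in code, the sketch below builds a CART model over bounding-box dimensions with scikit-learn. The toy features, labels, and parameters are illustrative assumptions and do not reproduce the published model.

```python
# Illustrative CART (decision tree) posture classifier on bounding-box
# dimensions; the training data below are toy values, not the study's data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [box_width, box_height] from the side view (arbitrary units).
X_train = np.array([[0.6, 1.0], [1.3, 0.7], [0.5, 1.7], [0.7, 0.9], [1.2, 0.8]])
y_train = np.array(["squat", "stoop", "stand", "squat", "stoop"])

cart = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
cart.fit(X_train, y_train)

print(cart.predict([[1.1, 0.8]]))  # a wide, short box classifies as a stoop-like lift
```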

