
In order to quantitatively evaluate our feedback method, we needed to acquire ground truth annotations. We consulted the MIT diving team coach, who watched a subset of the videos in our dataset (27 in total) and provided suggestions on how to improve each dive. The diving coach gave us specific feedback (such as “move left foot down”) as well as high-level feedback (e.g., “legs should be straight here” or “tuck arms more”). We translated each piece of feedback from the coach into one of three classes, indicating whether the diver should adjust his upper body, adjust his lower body, or maintain the same pose in each frame. Due to the subjective nature of the task, the diving coach was not able to provide more detailed feedback annotations, so the feedback is coarsely mapped into these three classes.

We then evaluate our feedback as a detection problem. We consider a feedback proposal from our algorithm correct if it suggests moving a body part within one second of the coach making the same suggestion. We use the magnitude of the feedback gradient as the importance of the feedback proposal.
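To make this evaluation protocol concrete, the following minimal Python sketch implements the matching criterion under an assumed data layout (lists of (frame, body_part, magnitude) proposals and (frame, body_part) coach annotations) that we introduce only for illustration; it is not the authors' code.

    # Minimal sketch of the detection-style feedback evaluation (assumed data layout).
    # proposals:   list of (frame, body_part, gradient_magnitude) produced by the model
    # annotations: list of (frame, body_part) given by the diving coach
    # fps:         frames per second, so "within one second" means within fps frames
    def match_proposals(proposals, annotations, fps):
        ranked = sorted(proposals, key=lambda p: -p[2])  # rank by importance
        labels, used = [], set()
        for frame, part, _magnitude in ranked:
            # Correct if an unmatched coach annotation for the same body part
            # lies within one second of the proposal.
            match = next((i for i, (f, p) in enumerate(annotations)
                          if i not in used and p == part and abs(f - frame) <= fps), None)
            labels.append(1 if match is not None else 0)
            if match is not None:
                used.add(match)
        return labels  # correctness of proposals in order of decreasing importance

    def average_precision(labels, num_positives):
        # Standard AP over the ranked proposal list.
        # num_positives: total number of coach annotations for this video.
        tp, precisions = 0, []
        for rank, correct in enumerate(labels, start=1):
            if correct:
                tp += 1
                precisions.append(tp / rank)
        return sum(precisions) / num_positives if num_positives else 0.0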

Fig. 8: Figure Skating Feedback Proposals: We show feedback for several of the figure skaters; the red vectors are instructions to the skaters.

Fig. 9: Feedback Limitations: The feedback we generate is not perfect. If the figure skater or diver were to rely completely on the feedback above, they might fall over. Our model does not factor in physical laws, motivating work in support inference [37, 38].

We use a leave-one-out approach in which we predict feedback on a video held out from training. Our feedback proposals obtain 53.18% AP overall for diving, compared to a 27% AP chance level. We compute chance by randomly generating feedback that uniformly chooses between the upper body and lower body.
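One possible reading of this chance baseline, continuing the illustrative data layout from the sketch above, is shown below; the frame-level granularity and the random importance score are our assumptions, not details stated in the paper.

    import random

    # Illustrative chance baseline: at each frame, propose feedback that picks the
    # upper or lower body uniformly at random, with a random importance score.
    def chance_feedback(num_frames):
        return [(frame, random.choice(["upper", "lower"]), random.random())
                for frame in range(num_frames)]

    # These random proposals are then scored with the same one-second matching
    # criterion and AP computation as the model's proposals.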

Since our action quality assessment model is not aware of physical laws, its feedback suggestions can be physically implausible. Fig. 9 shows a few cases where, if the performer followed our feedback, they might fall over. Our method’s lack of a physical model motivates work in support inference [37, 38].

Interestingly, by averaging the feedback across all divers in our dataset, we can find the most common feedback produced by our model. Fig. 10 shows the magnitude of feedback for each frame and each joint averaged over all divers. For visualization purposes, we warp all videos to have the same length. Most of the feedback suggests correcting the feet and hands, and the most important frames turn out to be the initial jump off the diving board, the zenith of the dive, and the moment right before the diver enters the water.
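One simple way to produce such an averaged visualization is to resample each diver's per-frame, per-joint feedback magnitudes to a common length and then average across divers. The sketch below assumes the feedback for each video is stored as a frames-by-joints NumPy array; the array layout and the common length of 100 are our assumptions for illustration.

    import numpy as np

    # feedback_per_video: list of arrays, each of shape (num_frames_i, num_joints)
    def average_feedback(feedback_per_video, common_length=100):
        warped = []
        for fb in feedback_per_video:
            src = np.linspace(0.0, 1.0, fb.shape[0])
            dst = np.linspace(0.0, 1.0, common_length)
            # Linearly resample each joint's magnitude curve onto the common timeline.
            warped.append(np.stack([np.interp(dst, src, fb[:, j])
                                    for j in range(fb.shape[1])], axis=1))
        mean_fb = np.mean(warped, axis=0)       # (common_length, num_joints)
        frame_marginal = mean_fb.sum(axis=1)    # importance of each (warped) frame
        joint_marginal = mean_fb.sum(axis=0)    # importance of each joint
        return mean_fb, frame_marginal, joint_marginal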


Fig. 10: Visualizing Common Feedback: We visualize the average feedback magnitude across the entire diving dataset for each joint and frame. Red means high feedback and blue means low feedback. The top and right edges show marginals over frames and joints, respectively. R and L stand for right and left, and U and D stand for upper and lower body. The feet are the most common area for feedback on Olympic divers, and the beginning and end of the dive are the most important time points.

4.5 Highlighting Impact

We qualitatively analyze the video highlights produced by finding the segments that contributed the most to the final quality score. We believe that this measure can be useful for video summarization since it reveals which clips of a long video are the most important for the action quality. Fig. 11 shows the impact computed on a routine from the figure skating dataset. Notice that when the impact is near zero, the figure skater is in a standard, upright position or in between maneuvers. The points of maximum impact correspond to jumps and twists of the figure skater, which contribute positively to the score if the skater performs them correctly, and negatively otherwise.
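Since the quality model in this paper is a linear support vector regressor, one plausible way to compute per-segment impact, sketched here under the assumption that the video-level feature decomposes as a sum of per-segment features, is to take each segment's dot product with the learned weights. The function names and the top-k selection are ours, for illustration only.

    import numpy as np

    # Hedged sketch: per-segment impact for a linear quality regressor with weights w,
    # assuming the video-level feature is a sum of per-segment features.
    def segment_impacts(segment_features, w):
        # segment_features: (num_segments, feature_dim); w: (feature_dim,)
        return segment_features @ w  # signed contribution of each segment to the score

    def top_highlights(segment_features, w, k=3):
        impacts = segment_impacts(segment_features, w)
        # Segments with the largest absolute impact (positive or negative) are the
        # candidate highlights for summarizing the video.
        order = np.argsort(-np.abs(impacts))
        return order[:k], impacts[order[:k]]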

4.6 Discussion

If quality assessment is a subjective task, is it reasonable to expect a machine to obtain meaningful results? Remarkably, the independent Olympic judges agree with each other 96% of the time, which suggests that there is some underlying structure in the data. One hypothesis to explain this correlation is that the judges follow a complex system of rules to gauge the score. If so, then the job of a machine quality assessment system is to extract these rules. While the approach in this paper attempts to learn these rules, we are still a long way from high performance on this task.

Fig. 11: Video Highlights: By calculating the impact each frame has on the score of the video, we can summarize long videos with the segments that have the largest impact on the quality score. Notice how, when the impact is close to zero, the skater is usually in a standard upright position, and when the impact is large, the skater is performing a maneuver.

5 Conclusions

Assessing the quality of actions is an important problem with many real-world applications in health care, sports, and search. To enable these applications, we have introduced a general learning-based framework to automatically assess an action’s quality from videos as well as to provide feedback on how the performer can improve. We evaluated our system on a dataset of Olympic divers and figure skaters, and we showed that our approach is significantly better at assessing an action’s quality than a non-expert human. Although the quality of an action is a subjective measure, the independent Olympic judges have a large correlation. This implies that there is a well-defined underlying rule that a computer vision system should be able to learn from data. Our hope is that this paper will motivate more work in this relatively unexplored area.

Acknowledgments: We thank Zoya Bylinskii and Sudeep Pillai for comments and the MIT diving team for their helpful feedback. Funding was provided by an NSF GRFP to CV and a Google research award and ONR MURI N000141010933 to AT.

References

1. Gordon, A.S.: Automated video assessment of human performance. In: AI-ED. (1995)
2. Jug, M., Perš, J., Dežman, B., Kovačič, S.: Trajectory based assessment of coordinated human activity. Springer (2003)
3. Perše, M., Kristan, M., Perš, J., Kovačič, S.: Automatic evaluation of organized basketball activity using Bayesian networks. Citeseer (2007)
4. Pirsiavash, H., Ramanan, D.: Detecting activities of daily living in first-person camera views. In: CVPR. (2012)
5. Ke, Y., Tang, X., Jing, F.: The design of high-level features for photo quality assessment. In: CVPR. (2006)
6. Gygli, M., Grabner, H., Riemenschneider, H., Nater, F., Van Gool, L.: The interestingness of images. (2013)
7. Datta, R., Joshi, D., Li, J., Wang, J.Z.: Studying aesthetics in photographic images using a computational approach. In: ECCV. (2006)
8. Dhar, S., Ordonez, V., Berg, T.L.: High level describable attributes for predicting aesthetics and interestingness. In: CVPR. (2011)
9. Gupta, A., Kembhavi, A., Davis, L.S.: Observing human-object interactions: Using spatial and functional compatibility for recognition. PAMI (2009)
10. Yao, B., Fei-Fei, L.: Action recognition with exemplar based 2.5d graph matching. In: ECCV. (2012)
11. Yang, W., Wang, Y., Mori, G.: Recognizing human actions from still images with latent poses. In: CVPR. (2010)
12. Maji, S., Bourdev, L., Malik, J.: Action recognition from a distributed representation of pose and appearance. In: CVPR. (2011)
13. Delaitre, V., Sivic, J., Laptev, I., et al.: Learning person-object interactions for action recognition in still images. In: NIPS. (2011)
14. Laptev, I., Perez, P.: Retrieving actions in movies. In: ICCV. (2007)
15. Sadanand, S., Corso, J.J.: Action bank: A high-level representation of activity in video. In: CVPR. (2012)
16. Rodriguez, M., Ahmed, J., Shah, M.: Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In: CVPR. (2008) 1–8
17. Efros, A., Berg, A., Mori, G., Malik, J.: Recognizing action at a distance. In: CVPR. (2003)
18. Shechtman, E., Irani, M.: Space-time behavior based correlation. PAMI (2007)
19. Poppe, R.: A survey on vision-based human action recognition. Image and Vision Computing 28(6) (2010) 976–990
20. Aggarwal, J.K., Ryoo, M.S.: Human activity analysis: A review. ACM Comput. Surv. 43(3) (2011)
21. Wang, H., Ullah, M.M., Klaser, A., Laptev, I., Schmid, C.: Evaluation of local spatio-temporal features for action recognition. In: BMVC. (2009)
22. Niebles, J., Chen, C., Fei-Fei, L.: Modeling temporal structure of decomposable motion segments for activity classification. ECCV (2010)
23. Laptev, I.: On space-time interest points. ICCV (2005)
24. Le, Q.V., Zou, W.Y., Yeung, S.Y., Ng, A.Y.: Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis. In: CVPR. (2011)
25. Wang, J., Liu, Z., Wu, Y., Yuan, J.: Mining actionlet ensemble for action recognition with depth cameras. In: CVPR. (2012)
26. Ekin, A., Tekalp, A.M., Mehrotra, R.: Automatic soccer video analysis and summarization. IEEE Transactions on Image Processing (2003)
27. Khosla, A., Hamid, R., Lin, C.J., Sundaresan, N.: Large-scale video summarization using web-image priors. In: CVPR. (2013)
28. Gong, Y., Liu, X.: Video summarization using singular value decomposition. In: CVPR. (2000)
29. Rav-Acha, A., Pritch, Y., Peleg, S.: Making a long video short: Dynamic video synopsis. In: CVPR. (2006)
30. Ngo, C.W., Ma, Y.F., Zhang, H.J.: Video summarization and scene detection by graph modeling. IEEE Transactions on Circuits and Systems for Video Technology (2005)
31. Jiang, R.M., Sadka, A.H., Crookes, D.: Hierarchical video summarization in reference subspace. IEEE Transactions on Consumer Electronics (2009)
32. Marszalek, M., Laptev, I., Schmid, C.: Actions in context. In: CVPR. (2009)
33. Yang, Y., Ramanan, D.: Articulated pose estimation with flexible mixtures-of-parts. In: CVPR. (2011)
34. Park, D., Ramanan, D.: N-best maximal decoders for part models. In: ICCV. (2011)
35. Drucker, H., Burges, C.J., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. NIPS (1997)
36. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST) (2011)
37. Silberman, N., Hoiem, D., Kohli, P., Fergus, R.: Indoor segmentation and support inference from RGBD images. In: ECCV. (2012)
38. Zheng, B., Zhao, Y., Yu, J.C., Ikeuchi, K., Zhu, S.C.: Detecting potential falling objects by inferring human action and natural disturbance. In: IEEE Int. Conf. on Robotics and Automation (ICRA) (to appear). (2014)


