Research Article | Assessment

Types of Feedback in Competency-Based Predoctoral Orthodontics: Effects on Students’ Attitudes and Confidence

Mitchell J. Lipp, Kiyoung Cho and Han Suk Kim
Journal of Dental Education May 2017, 81 (5) 582-589; DOI: https://doi.org/10.21815/JDE.016.021
Dr. Lipp is Clinical Associate Professor and Director of Predoctoral Orthodontics, Department of Orthodontics, New York University College of Dentistry; Mr. Cho is a second-year dental student, New York University College of Dentistry; and Dr. Kim was a dental student, New York University College of Dentistry at the time of the study and graduated in 2016. Correspondence: mitchell.lipp@nyu.edu


Abstract

Feedback can exert a powerful influence on learning and achievement, although its effect varies. The aim of this study was to investigate the effects of three types of feedback on dental students’ attitudes and confidence in a competency-based course in predoctoral orthodontics at New York University College of Dentistry. In 2013–14, all 253 third-year students in a course using test-enhanced instructional methods received written feedback on formative assessments. The type of feedback varied across three groups: pass/fail grades (PF; N=77), emoticons (EM; N=90), or written comments (WC; N=86). At the end of the course, students completed surveys that included four statements addressing their attitudes toward course instruction and confidence in their abilities. The survey response rate ranged from 75% to 100% among groups; the lowest response rate (75%) was in the PF group. In attitudes toward course instruction and confidence in their abilities, the WC group trended to more positive responses than the other groups, while the PF group trended to negative responses. On two of the four statements, the trend for the WC group was significant (95% CI). On both statements concerning attitudes toward instruction, the PF group trended to negative responses that were significant (95% CI). These results support the effectiveness of descriptive written comments over pass/fail grades or emoticons in improving dental students’ confidence in their abilities and their attitudes toward instruction.

  • dental education
  • assessment
  • educational measurement
  • teaching feedback
  • competency based education
  • attitude
  • orthodontics

This study continues an investigation in which evidence-based methods of assessment and instruction reported in the education and psychology literature are applied to a predoctoral dental course assessing competence in management of patients with malocclusion and skeletal problems.1–3 In this course, assessment emulates the thinking activities of the practitioner: constructing problem lists, treatment objectives, and treatment plans based on patient records (radiographs, intraoral and facial photographs, and history). Students demonstrate competence relative to defined success criteria, the basis for evaluation. Modifications in assessment and instructional methods have been ongoing. In 2012, a test-enhanced method of instruction based on formative assessments with feedback was introduced. When compared to a traditional classroom approach (presentations and in-class exercises), the test-enhanced method generally boosted performance (i.e., higher grades), while not demonstrably affecting pass rates.3 Although test-enhanced methods (in areas not related to competency-based dental education) have been reported to improve performance, they also deflate students’ confidence compared to traditional studying.4,5 Since students’ confidence is related to success and satisfaction, practitioners of test-enhanced instruction need to consider how to address this issue.4

Feedback fosters learning when information given to the learner is related to a task and fills a gap between what is done and the goal. If the learner is committed to improvement, thereby reducing the gap, the learner will try again, and feedback information is looped back until performance standards (goals) are achieved. Hattie and Timperley suggested a framework for delivering feedback as a response to three questions: 1) where are you going? (what are the goals or performance expectations?); 2) how’s it going? (feedback on performance, identifying gaps relative to the goal); and 3) where to next? (feeding forward, guiding, and advancing the learner to the next step closer to the goal).6 In a synthesis of 1,200 meta-analyses, Hattie reported effect sizes on student achievement demonstrating the powerful influence of feedback on learning.7 Even though feedback is generally accepted as a powerful tool in enhancing learning, there are controversies. Most studies have found support for the idea that comments on performance improve learning,6,8–10 but Ende in 1983 argued that feedback should not include performance information at all, to avoid damaging the learner’s self-efficacy.11 Contrary to commonly held beliefs, not all types of feedback (grades, praise, criticism) benefit learning.6,11,12

In contrast to the prior study dealing with objective indicators of student performance,1 this study considers students’ subjective perceptions of instruction and their abilities. The aim of this study was to investigate the effects of three types of feedback on dental students’ attitudes and confidence in a competency-based course in predoctoral orthodontics at New York University (NYU) College of Dentistry.

Methods

The study was deemed exempt from oversight by the NYU Institutional Review Board (#13-9723). Study design was based on cluster random sampling. Three groups of students enrolled in D3 orthodontics seminars at NYU College of Dentistry in 2013–14 (N=253, all third-year students) took a series of formative assessments based on clinical simulation cases. All groups received the same instruction taught by the same instructor. Students constructed a problem list, treatment objectives, and treatment plan for each assessment.

All groups received the same formative assessments and instructions. Assessments were graded relative to the same evaluative (success) criteria, but each group was given a different type of written feedback. Group PF (N=77) received grades of Pass or Fail; Group EM (N=90) received emoticons, e.g., smiling, straight, or frowning faces; and Group WC (N=86) received short written comments on their performance (Table 1).

Table 1

Types of written feedback used in reviewing formative assessments

At the end of the course, students completed 13-item surveys that included four statements about their attitudes toward course instruction and confidence in their abilities. Two statements concerned attitudes toward instruction: 1) “I would grade this course” (response options: A, B, C, D, F); and 2) “This course was” (response options: awful, below average, average, very good, or outstanding). The other two statements concerned students’ confidence in their abilities: 1) “I can construct problem list, treatment objectives, and management plan”; and 2) “I can diagnose and manage patients who may benefit from comprehensive orthodontic treatment and/or skeletal correction.” These statements were rated on a five-point Likert scale.

The surveys were administered in the last session of the D3 orthodontics seminars course. After completing a summative assessment, students completed the surveys anonymously, folded them in half, and placed them in a ballot box. Students completed the surveys before they knew their grades (for the assessment or the course). Data were calculated as percentages (responses/N), and error bars were set at 95% confidence intervals using vassarstats.net. Internal consistency between statement pairs was calculated by crosstab symmetric measures to determine Cohen’s kappa coefficient values.
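
To make the confidence-interval step concrete, the following is a minimal sketch (in Python rather than vassarstats.net, and not the authors’ code) of how a 95% confidence interval for a single response proportion could be computed; the Wilson score method and the counts shown are illustrative assumptions.

```python
# Minimal sketch (not the authors' procedure): 95% confidence interval for a
# response proportion, e.g., a hypothetical 82 of 86 students in one group
# choosing a given option. The Wilson score interval is assumed here;
# vassarstats.net may use a different method.
from statsmodels.stats.proportion import proportion_confint

count, n = 82, 86  # invented counts for illustration
low, high = proportion_confint(count, n, alpha=0.05, method="wilson")
print(f"proportion = {count / n:.1%}, 95% CI = [{low:.1%}, {high:.1%}]")
```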

Results

The response rate for the 253 students ranged from 75% to 100% across the three groups. The lowest response rate (75%) was in the PF group. Ambiguous responses, such as indicating more than one response option or modifying response options (e.g., writing “B-”), were not included in the analysis.

Internal consistency was calculated for each group (Table 2). Internal consistency was poor between the statement pair related to attitudes toward instruction, shown in responses to “I would grade this course” and “This course was.” Kappa values for the PF, EM, and WC groups were 0.14, 0.04, and 0.08, respectively; the total kappa was 0.12. For the PF and WC groups, internal consistency was fair to good between the statement pair pertaining to confidence in abilities, “I can construct problem list” and “I can diagnose and manage.” Kappa values for the PF and WC groups were 0.51 and 0.52, respectively. The EM group was less reliable, with kappa=0.35.
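
For readers who want to reproduce agreement figures like these, the sketch below computes Cohen’s kappa for one statement pair. The paired ratings are invented, and the interpretation bands in the comment follow commonly cited benchmarks, which may differ from those the authors applied.

```python
# Minimal sketch (not the authors' code): Cohen's kappa between two survey
# statements answered on the same scale, one entry per respondent.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
# p_e is the agreement expected by chance.
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired responses to Q3 and Q4 for six students
q3 = ["agree", "agree", "neutral", "strongly agree", "agree", "disagree"]
q4 = ["agree", "neutral", "neutral", "strongly agree", "agree", "neutral"]

kappa = cohen_kappa_score(q3, q4)
# One commonly cited rule of thumb (assumed here, not taken from the article):
# < 0.40 poor, 0.40-0.75 fair to good, > 0.75 excellent agreement.
print(f"kappa = {kappa:.2f}")
```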

Table 2

Crosstab symmetric measures between statement pairs: Q1 with Q2 and Q3 with Q4

A summary of responses to the four statements across the three feedback groups is shown in Table 3. Overall, the WC group had better attitudes toward the course than did the other groups (Figure 1). In the WC group, 95.4% graded the course above C (out of response options A, B, C, D, F), in contrast to 75% in the EM group and 50% in the PF group. In the WC group, 67.1% evaluated the course as very good or outstanding, compared to 33.7% in the EM group and 29% in the PF group. Students in the WC group also had more confidence in their abilities than students in the other groups (Figure 2). In the WC group, 84.9% agreed that they can construct a problem list, treatment objectives, and management plan, in contrast to 62.7% in the PF group and 69.7% in the EM group. In the WC group, 90.7% agreed that they can make orthodontic diagnoses and manage patients, compared to 80.9% in the EM group and 77.3% in the PF group.

Figure 1

Responses to Q1 (I would grade this course) and Q2 (This course was)

Note: For Q1, 95.4% of the written comments (WC) group graded the course above “C,” compared to 50% of the pass/fail (PF) group and 75% of the emoticons (EM) group. For Q2, 67.1% of the WC group responded that the course was very good or outstanding, compared to 29% of the PF group and 33.7% of the EM group. Asterisks denote statistically significant differences.

Figure 2

Responses to Q3 (I can construct problem list) and Q4 (I can diagnose and manage patients)

Note: For Q3, 84.9% of the written comments (WC) group were confident in their treatment planning abilities, compared to 62.7% of the pass/fail (PF) group and 69.7% of the emoticons (EM) group. For Q4, 90.7% of the WC group agreed that they can make orthodontic diagnoses, compared to 77.3% of the PF group and 80.9% of the EM group. The asterisk denotes a statistically significant difference.

Table 3

Responses of the pass/fail (PF), emoticon (EM), and written comments (WC) groups to all four questions

Some of the contrasts among feedback groups in students’ attitudes toward instruction were significant at 95% confidence intervals. The WC group was significantly more likely to grade the course A or B, while the PF group was significantly more likely to grade the course D or F; no respondents in the WC group graded the course D or F. The WC group was significantly more likely to respond that the course was very good or outstanding, the EM group to rate the course as average, and the PF group to rate it as below average or awful. The WC group agreed significantly more often than the PF group with the statement “I can construct a problem list, treatment objectives, and management plan”; the other groups (EM, PF) gave more neutral and negative responses (disagree, strongly disagree) to the same statement.

Overall, the WC group trended toward positive responses compared to the other groups, while the PF group trended toward negative responses. For both statements about attitudes toward instruction, this result was significant at the 95% confidence interval. Concerning confidence in abilities, the WC group trended toward positive responses that were significant relative to the PF group in one instance (“I can construct problem list, etc.”).

Discussion

Although prior studies have suggested that test-enhanced learning improves recall and may improve performance in summative competency assessments,3,4 there may be undesirable side effects on students’ perceptions of instruction.13 Through assessments, learners are made aware of errors, and this knowledge may threaten self-efficacy. The manner in which feedback is given to the learner may influence his or her receptivity to learning.

Our study emerged as a practical response to a shift from a traditional instructional method (classroom presentations, in-class exercises) toward a test-enhanced instructional approach with feedback on formative assessments. In the first cycle of the transition, students were unhappy despite performing at higher levels than in previous years. After the type of feedback given on formative assessments changed from P/F grades to emoticons, attitudes toward the course improved. Later, the type of feedback was changed to include only written comments that were specific to the task, without judgments (grades, emoticons, praise, criticism) that could be directed to the learner’s sense of self. The results were powerful and surprising. By changing the type of feedback, student attitudes and confidence improved. This finding is consistent with other studies in areas not related to competency-based dental education.9,14–16 Lipnevich and Smith reported that students who were shown a grade for their first draft essay performed less well on the final version than those who were not shown their grade.14 They concluded that the presence of a grade resulted in higher negative affect and lower self-efficacy. Black and Wiliam similarly concluded that descriptive feedback in formative assessments (not grades or scores) led to the greatest improvement in performance.15 They compared effects between instructional groups, reporting that the group given comments demonstrated significant improvement (almost 30%), while the grade-only and grade-with-comments groups showed a significant decline. The results of our study support other published studies that assessed the effectiveness of feedback in learning.6,8,9

In our study, the surveys were administered under authentic classroom conditions. There were real-world considerations limiting the length of the survey to encourage participation: the total survey consisted of 13 statements, four of which were extracted for the purpose of this study. It is a fallacy to make inferences based on a single survey statement; validity is increased with multiple statements intended to measure the same underlying construct.17 In this study, two separate statements designed to measure the same construct trended similarly, demonstrating benefits of feedback with written comments. The strength of these observations would have been greater if there had been more statements related to the two constructs (attitudes toward instruction and confidence in abilities). This is important to consider since separate statements have limitations. For example, the confidence in abilities statements bundled different skills (constructing a problem list, treatment objectives, and management plan), so we could not determine whether student confidence varied among these skills. Another limitation is that we could not determine any effect of the order of survey questions, which was found to affect responses in a previous study.18

Concerning internal consistency, there was poor agreement between a holistic appraisal of the course using grades A–F and a similar statement using qualitative words (awful, below average, average, very good, outstanding). There may be confusion concerning definitions of “average,” which relates to an individual’s perception of performance standards and possibly a culture of grade inflation, since many students now consider a B the equivalent of average. After this study, we refined our word scale to order the relationships between responses more clearly. The new five-point response scale replaces “awful” and “outstanding” with “very bad” and “very good.”

The internal consistency between the two statements concerning confidence in abilities was fair to good, with the exception of the EM group (kappa=0.35). To rule out a group effect distorting the results, we expanded the sample size to include other groups that were not part of the study but similarly received emoticon feedback. As the sample size increased, the kappa coefficient increased from 0.35 to 0.51, demonstrating overall fair to good agreement between the statement pair (Table 4).

Table 4

Crosstab symmetric measures for Q3/Q4 when N for emoticon (EM) group increased from 90 to 286

This study examining the effects of feedback on student confidence and attitudes had some limitations. Despite the effects noted, it is unknown whether the differences in student confidence and attitudes among the feedback groups related to actual student achievement. Kruger and Dunning confirmed the frequent assumption that most people tend to overestimate their abilities relative to their actual performance.19 Future investigations will consider effects on performance assessments. An additional limitation is that this study took place in only one course at one dental school, so its findings are not generalizable to all dental students.

Conclusion

Learning is a multifaceted process that includes multiple cognitive domains, complex emotions, and dimensions of self embedded in a sociocultural context. Feedback should be given under conditions that foster safety, trust, engagement, motivation, and commitment. The instructional target (goal, performance standards, success criteria, expectations) should be explicit and known to both the giver and receiver of feedback, and assessment must be aligned with the instructional target. When feedback is given, the learner should have sufficient time to act on it and opportunities to demonstrate improvement. Types of feedback directed to the self, including grades and emoticons, affect self-esteem and may hinder learning. Our study found beneficial effects of written feedback given to dental students on formative assessments in terms of their attitudes toward instruction and confidence in their abilities. Descriptive written comments, without grades, emoticons, praise, or criticism, demonstrated a benefit for students’ attitudes toward the course and their confidence in diagnosis and treatment planning. Pass/fail grades appeared to be most damaging, especially concerning attitudes toward instruction. More research is necessary to reveal the true effect of feedback in learning. Attention to the type of feedback appears to mitigate some of the detrimental effects reported for test-enhanced instruction.

Acknowledgments

We thank Dr. Sarah Prehn, Dr. Nicolas Freda, and Dr. Jae Ik Kim for developing the project and provisional statistical analysis. We also thank Dr. Mal Jamal for statistical consultation and Ms. Eileen Rosa for organizational and administrative support.

Footnotes

Disclosure: The authors reported no conflicts of interest.

REFERENCES

  1. Lipp MJ. An “objectified” competency-based course in the management of malocclusion and skeletal problems. J Dent Educ 2008;72(5):543–52.
  2. Lipp MJ. A process for developing assessments and instruction in competency-based dental education. J Dent Educ 2010;74(5):499–509.
  3. Freda N, Lipp MJ. Test-enhanced learning in competency-based predoctoral orthodontics: a four-year study. J Dent Educ 2016;80(3):348–54.
  4. Roediger H, Karpicke J. Test-enhanced learning: taking memory tests improves long-term retention. Psychol Sci 2006;17(3):249–55.
  5. Karpicke J. Retrieval-based learning: active retrieval promotes meaningful learning. Curr Dir Psychol Sci 2012;21(3):157–63.
  6. Hattie J, Timperley H. The power of feedback. Rev Educ Res 2007;77(1):81–112.
  7. Hattie J. Scholarship of teaching and learning in psychology. J Am Psychol Assoc 2015;1(1):79–91.
  8. Kluger A, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychol Bull 1996;119(2):254–84.
  9. Butler R. Enhancing and undermining intrinsic motivation: the effects of task-involving and ego-involving evaluation on interest and performance. Br J Educ Psychol 1988;58:1–14.
  10. Eva KW, Regehr G. Effective feedback for maintenance of competence: from data delivery to trusting dialogues. CMAJ 2013;185(6):463–4.
  11. Ende J. Feedback in clinical medical education. JAMA 1983;250(6):777–81.
  12. Ende J, Pomerantz A, Erickson F. Preceptors’ strategies for correcting residents in an ambulatory care medicine setting: a qualitative analysis. Acad Med 1995;70(3):224–9.
  13. Carey B. Frequent tests can enhance college learning, study finds. New York Times, 20 Nov. 2013.
  14. Lipnevich A, Smith J. Response to assessment feedback: the effects of grades, praise, and source of information. Princeton, NJ: Educational Testing Service, 2008.
  15. Black P, Wiliam D. Assessment and classroom learning. Assess Educ Principles Policy Pract 1998;5(1):1–65.
  16. Baumeister R, Hutton D, Cairns K. Negative effects of praise on skilled performance. Basic Appl Soc Psychol 1990;11:131–48.
  17. Gliem J, Gliem R. Calculating, interpreting, and reporting Cronbach’s alpha reliability coefficient for Likert-type scales. Presented at the Midwest Research to Practice Conference in Adult, Continuing, and Community Education, 2003.
  18. Krosnick J, Berent M. Comparisons of party identification and policy preferences: the impact of survey question format. Am J Polit Sci 1993;37:941–64.
  19. Kruger J, Dunning D. Unskilled and unaware of it: how difficulties in recognizing one’s own incompetence lead to inflated self-assessments. J Pers Soc Psychol 1999;77:1121–34.