Assessment Glossary

Assessment-related terms used throughout this website are defined below.

When designing an assessment strategy, and when selecting and evaluating assessment tools, it is important to consider factors such as those defined here.


Adverse Impact
A substantially different rate of selection in hiring which works to the disadvantage of members of any race, sex, or ethnic group.
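In U.S. practice, adverse impact is commonly screened with the "four-fifths rule" from the Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80 percent of the rate for the group with the highest rate is generally regarded as evidence of adverse impact. A minimal sketch of that computation in Python, using illustrative counts rather than real applicant data:

    def selection_rate(selected, applicants):
        """Fraction of applicants in a group who were selected."""
        return selected / applicants

    # Illustrative counts only, not real applicant data.
    rates = {
        "group_a": selection_rate(48, 100),
        "group_b": selection_rate(30, 100),
    }

    highest = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / highest
        flag = "possible adverse impact" if ratio < 0.8 else "ok"
        print(f"{group}: rate={rate:.2f}, ratio={ratio:.2f} ({flag})")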
Behavioral Consistency Method
Based on the principle that past behavior is the best predictor of future behavior. In practice, the method involves describing previous accomplishments gained through work, training, or other experience (e.g., school, community service, hobbies) and matching those accomplishments to the competencies required by the job.
Competency
A measurable pattern of knowledge, skills, abilities, behaviors, and other characteristics that an individual needs to perform work roles or occupational functions successfully. Competencies specify the "how" of performing job tasks, or what the person needs in order to perform the job successfully.
Concurrent Validity
In a concurrent study, job incumbents (i.e., current employees) are tested and their job performance is evaluated at the same time. The relation between current performance on the assessment and on the job can then be examined. Evidence of concurrent validity is often substituted for predictive validity. Whether this is appropriate will depend on the type of measure and how similar the incumbent sample is to the applicant population.
Construct Validity
A construct refers to the underlying trait (e.g., intelligence, sociability) assumed to be measured by an assessment. Construct validation involves collecting evidence to determine whether the assessment does indeed measure the trait it was intended to measure.
Content Validity
Evidence (based on job analysis and expert judgment) that the items or tasks included in the assessment logically match or represent the tasks or competencies required by the job.
Criterion-related Validity
The degree to which performance on an assessment procedure predicts (or is statistically related to) an important criterion such as job performance, training success, or productivity. There are two major types of criterion-related validity, concurrent and predictive.
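Criterion-related validity is usually summarized as a correlation coefficient between assessment scores and the criterion measure. A minimal sketch with hypothetical scores (the data and the 0-to-5 performance scale are assumptions for illustration):

    from statistics import correlation  # requires Python 3.10+

    # Hypothetical assessment scores and later job performance ratings.
    assessment  = [52, 61, 70, 45, 88, 73, 66, 59]
    performance = [3.1, 3.4, 4.0, 2.8, 4.6, 4.1, 3.6, 3.2]

    r = correlation(assessment, performance)  # Pearson r
    print(f"criterion-related validity (Pearson r): {r:.2f}")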
Face Validity/Applicant Reactions
An applicant's perception of how valid a measure is based on simple visual inspection. Though face validity alone cannot be used to support the use of an assessment, it is important because it promotes cooperation and acceptance of the assessment process on the part of applicants.
Incremental Validity
The extent to which a new assessment adds to the prediction of job success above and beyond the forecasting powers of an existing assessment.
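Incremental validity is often quantified as the gain in squared multiple correlation (delta R^2) when the new assessment is added to a regression model that already contains the existing assessment. A sketch using NumPy with invented scores, not a prescribed procedure:

    import numpy as np

    # Hypothetical scores: an existing assessment, a new assessment,
    # and a job performance criterion for the same eight people.
    existing  = np.array([52, 61, 70, 45, 88, 73, 66, 59], dtype=float)
    new       = np.array([14, 11, 18,  9, 19, 17, 12, 15], dtype=float)
    criterion = np.array([3.1, 3.4, 4.0, 2.8, 4.6, 4.1, 3.6, 3.2])

    def r_squared(predictors, y):
        """R^2 from an ordinary least-squares fit with an intercept."""
        X = np.column_stack([np.ones(len(y)), *predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta
        return 1 - residuals.var() / y.var()

    r2_existing = r_squared([existing], criterion)
    r2_both     = r_squared([existing, new], criterion)
    print(f"delta R^2 (incremental validity): {r2_both - r2_existing:.3f}")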
Job Analysis
A systematic examination of the tasks performed in a job and the competencies required to perform them. For more information on Job Analysis, please view the Job Analysis page.
Predictive Validity
In a predictive study, job applicants are tested and their performance evaluated at a later time, usually after being on the job for 6 months or more. The relation between performance on the assessment and on the job can then be examined.
Reliability
The extent to which applicants' scores on an assessment are consistent when the applicants are reexamined with the same or an equivalent form of the assessment (e.g., a test of keyboarding skills).
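Reliability is often reported as a test-retest coefficient: the correlation between scores from two administrations of the same (or an equivalent) form. A minimal sketch with hypothetical scores:

    from statistics import correlation  # requires Python 3.10+

    # Hypothetical scores from two administrations of a keyboarding test.
    first_attempt  = [41, 55, 63, 38, 72, 60]
    second_attempt = [44, 53, 66, 40, 70, 62]

    r = correlation(first_attempt, second_attempt)
    print(f"test-retest reliability: {r:.2f}")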
Subgroup Differences
The extent to which the assessment method has been shown to result in different pass (or selection) rates, average scores, or prediction errors across groups, typically based on race, ethnicity, or gender.
Subject Matter Expert (SME)
A person with bona fide expert knowledge about what it takes to do a particular job. First-level supervisors are normally good SMEs. Superior incumbents in the same or very similar positions and other individuals can also be used as SMEs if they have current and thorough knowledge of the job's requirements.
Task
Tasks are activities an employee performs on a regular basis to carry out the functions of a job. Tasks typically begin with an action verb (e.g., analyze, build, develop) and should specify an observable action.
Utility/Return on Investment (ROI)
The extent to which the benefits gained from using the assessment method outweigh the costs of development and administration.
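One widely cited way to put a dollar figure on this trade-off is the Brogden-Cronbach-Gleser utility model, which estimates the benefit of a selection procedure from its criterion-related validity, the dollar variability of job performance, and the average standardized score of those selected. A sketch with illustrative values (all figures are assumptions, not OPM estimates):

    # Brogden-Cronbach-Gleser utility estimate; all figures illustrative.
    n_selected   = 50        # hires made using the assessment
    tenure_years = 2.0       # average tenure of a hire
    validity     = 0.35      # criterion-related validity (correlation)
    sd_y         = 12_000.0  # std. dev. of performance in dollars per year
    mean_z       = 1.1       # mean standardized score of those selected
    cost         = 40_000.0  # total development and administration cost

    benefit = n_selected * tenure_years * validity * sd_y * mean_z
    print(f"estimated net gain: ${benefit - cost:,.0f}")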
Validity
The extent to which assessment scores are related to current or future job performance (or some other work-related outcome such as training success, productivity, absenteeism, turnover). For types of validity evidence, see content validity, construct validity, and criterion-related validity.
