Strong theoretical arguments imply that for directly representing and interpreting statistical data as evidence, the proper vehicle is the likelihood function. These arguments have had limited impact on statistical practice. One reason for this is that likelihood functions are explicitly model dependent and thus appear to be inherently nonrobust. In this article we examine the concept of robustness as it relates to likelihood functions. We note five ways that likelihood functions can be used to represent and interpret statistical data as evidence. These various uses suggest corresponding senses in which one likelihood function can approximate another, and these in turn suggest different senses in which a likelihood function can be "robust." We establish some general relationships among these senses of robustness, and examine two general techniques for producing robust likelihoods.
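To fix ideas, the use of a likelihood function to represent data as evidence can be sketched in a minimal example (the binomial model, the data values, and the hypothesized parameter values below are illustrative assumptions, not taken from the article): the likelihood ratio measures the relative support the observed data give to two competing parameter values.

```python
import math

def binomial_likelihood(theta, successes, trials):
    """Likelihood of success probability theta, given observed binomial data.

    Note: this is explicitly model dependent -- the binomial model is
    assumed, which is the source of the robustness concern discussed above.
    """
    return math.comb(trials, successes) * theta**successes * (1 - theta)**(trials - successes)

# Hypothetical data: 7 successes in 10 trials.
obs_successes, obs_trials = 7, 10

# Likelihood ratio comparing theta = 0.7 against theta = 0.5:
# values above 1 indicate the data support 0.7 over 0.5.
lr = binomial_likelihood(0.7, obs_successes, obs_trials) / \
     binomial_likelihood(0.5, obs_successes, obs_trials)
print(lr)
```

Since the observed proportion is 0.7, the likelihood is maximized there, and the ratio favors theta = 0.7; a "robust" likelihood, in the senses the article distinguishes, would preserve such evidential comparisons under departures from the assumed model.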