There is a substantial amount of work that ought to be published, or otherwise made available to others, so that they do not repeat unnecessary experiments. Within this context, what counts as good data and what counts as a good experiment are two separate issues. An example of data that ought to be available to users is the genome project data. A good experiment is a thoughtful one: it asks a question and is designed with the best controls available within the constraints of the science of its time. Once we agree that there is a place for good experiments and good data, we can talk about hot papers.
Hot papers are usually those that a group of specialists in a field decides will have wide implications. Whether they actually do is a separate issue. It is these hot papers that the so-called "high impact" journals tend to pick up. Often a subject becomes trendy for a while, and the hot journals pick up papers on it. However, if a paper is truly hot, it will be picked up despite rejection by the hot journals. Examples are two papers by W. W. Cleland in BBA around 1960, which were rejected by a hot biochemical journal and turned out to have much greater impact. So any university that evaluates papers based on the journals they appear in is making a mistake. If the interviewing faculty have any interest and vision, they should read the paper itself (God forbid!). Using actual citations would be better than using the journal name; by no means do I suggest using citations as the sole criterion, only that they are a better criterion than the journal.