Publication venues and their impact on a researcher's career

When publishing I would always prefer to go with SciPost because it aligns much better with my personal view of how scientific publishing should work. However, lately I’ve been thinking about the impact of this choice on my future research career (post-PhD). A couple of questions come to mind:

  • I would mostly publish in a still relatively unknown journal. Would that raise a lot of eyebrows in the future? In my field lots of people publish mostly in PRB, but that’s a widely known journal.
  • I would be missing “big names” like PRL, PRX in my CV. Now I personally don’t care about this. But again, I’m wondering how much of an impact it could have in my career.

Interested in hearing everyone’s thoughts! :slight_smile:


I should also add that my field is theoretical condensed matter. These considerations are likely even more important in experimental fields, where having Nature/Science publications seems like a hard requirement if you want to have a future in science…

Great question! Obviously I’m on the same side of the table in the hiring process as you are, and I would appreciate someone on the opposite side chiming in too, but here are my two cents based on my impression of how we have interviewed and discussed candidates among ourselves.
It doesn’t seem to me that the journal of a publication plays any major role except for the first couple of minutes when we look at the CV. On these occasions the merits of the works are judged by themselves, and people in the same field usually have their own, often strong, opinions of where a work “deserves” to be published anyway. So I’d say the journal of publication matters more to those who are more distant from the field of research. Those who are willing to host you for a few years are probably willing to spend the effort to read beyond the cover of the book.
On the other hand, we probably all have indeed judged a book by its cover… But I think even there the choice of journal might become part of the cover too. Fwiw, I myself would understand a publication list consisting mostly of SciPost papers to be carrying some sort of statement about the tastes and beliefs of the author, and would receive it positively. Not sure how often this applies to PIs, but just saying: even from a utilitarian angle, choosing to publish in an unconventional journal might be a signal in itself, as is the choice among conventional journals.

1 Like

Indeed, it’s definitely a great question, and at this point I cannot draw a definitive conclusion. I can list the considerations that I have.

  • I still, disappointingly, see researchers who consider it OK to judge an applicant just by looking at journal names, because it indicates at the very least the “willingness to play the game”.
  • Furthermore, journal-based policies are often institutionalized in several countries in Eastern Europe, Asia, and probably elsewhere. There, publications in a new journal don’t count for anything because the bureaucracy hasn’t adapted.
  • The degree to which SciPost itself stays new and radical is dropping rapidly. I think this summer it will get an impact factor and become a journal like all the others.
  • I don’t expect it will become a “high profile” journal, or at least not soon.

Overall I am leaning towards considering publishing exclusively in SciPost a strategy that increases the risk of being negatively assessed, although within theoretical condensed matter this risk is already manageable and is rapidly diminishing further.


This question has always been one of the top questions asked about SciPost.

In an ideal world we wouldn’t care much about the IF, but although DORA has been around for a long time, few places really implement it, so it’s understandable for people to be concerned.

For SciPost, one of the main problems with the IF is that it takes years to obtain, adapts very slowly to changes, and is thus the real innovation killer in publishing.

We’re coming over the hill now, however. SciPost Physics will obtain an Impact Factor at the end of June 2020; this should place it somewhere between PRB and PRL. Not bad for a recent title, no?

Going further (and also in response to Anton), I do expect SciPost Physics to become quite high-profile; we are currently sharpening the criteria (linked with the launch of SciPost Physics Core). It will, however, again take a couple of years for the sclerotic IF to catch up to this new reality.

So, as far as I can tell, there is now a lot of evidence that publishing in SciPost Physics will actually help your career more than publishing in many other journals.


Without specifically concentrating on SciPost, let me comment on how I see the situation in general.

The impact of your research is essentially independent of where you publish it. High-impact journals are of course exactly that because their impact on average (measured, for example, by the impact factor) is higher. However, this is only on average. If you choose to send articles to lower-impact journals, this does not by itself mean that the articles are worse. Those of us at advanced stages of a scientific career have plenty of examples of articles that were published in high-impact journals and still had no impact, or that were rejected and became high-profile.

In terms of your standing in the field (let us define it as your relation with the people you can reasonably meet several times per year at conferences and research visits), it also does not matter much. These people are supposed to understand what you are doing and how important it is, and they do not particularly care where you publish. You should be helping them by talking about your own results, preferably before they get published.

However, there are many situations in which you will be judged by people who cannot, or do not have the time to, understand the substance of what you have done. This happens when you are looking for a new job, trying to make a new career step (these two situations are mitigated by recommendation letters, but not fully), or applying for funding. In these situations people will look for formal indicators. In most places you may not show the citation record, and then the only two things left for someone who does not know you are (i) in what journals you publish and (ii) who your coauthors are. We can argue that this is bad practice and should be stopped, but things are what they are. I would say that in these situations, if you have more high-impact journals, you typically score higher.


Thank you everyone for the very thoughtful replies. This has been really helpful!

@gsteele13, I think everyone would also be really interested in hearing your opinion :slightly_smiling_face:

As you might know, I’ve also given the topic of the role of journals some thought:

I am fully supportive of open publications and a renewal of the publication process.

(And, actually, in addition to SciPost, someone who is very active in this is actually the Nature journals themselves, in particular the recent initiatives they have been pushing to be more open about reviews, publishing review reports, and encouraging reviewers to volunteer to identify themselves, even during the review process.)

But although I like the initiative, SciPost for me missed an important factor: what is, what I like to call, the “perceived impact level” they are targeting?

In other words: is SciPost targeting a “quality level” of PRB? Or PRL? Or Nature? Or Nature Physics? Or Solid State Communications? Or European Journal of Physics E?

(A SciPost editor once told me they perceive themselves to be something like PRB “level” papers.)

I like to call it “perceived impact”, or maybe “estimated impact”, since, of course, not even the smartest editors and smartest reviewers in the universe can assess the importance that a paper will turn out to have in 5 or 10 years.

However, whatever you might think of it, there is a process by which your work is reviewed by an editor and by multiple experts in the field who read your work and try to make an estimate of the “level” of your work compared to other works they have read in the journal you have submitted to. This is, at least, something.

And if I am in a committee and I have say 2 hours (in the evening after the kids go to bed since I don’t actually have time to do it during the day) to read 30 CVs and make a snap judgement of who I would invite and who I would not invite for an interview, “something” is highly valuable for me. It is just not possible for me to personally read every one of your papers in depth to determine if your work is of high quality or not: I just don’t have time. And there is no magic fairy that is going to somehow “create” more time for me to do this.

Upshot: Just everybody publishing all their work in SciPost is not a solution, it does not address one of the key things we rely on journals for: an assessment by experts in the field of the perceived impact / importance of a work.

Of course, the review process itself is not perfect, and in particular, not nearly transparent enough. There are attempts to make it more open, such as those by SciPost and Nature. But there is not yet a viable solution: for example, for Nature papers, the review discussions of papers that are rejected are NOT published.

A few years ago, myself and some of my colleagues were brainstorming about a solution to this. The idea is as follows:

  • All papers in physics are submitted to just one place, an arXiv-like central server
  • These are then sent out for review by certified experts in the field (based on a central database, also to distribute load)
  • The authors indicate what they think the “perceived impact level” is by clicking on a button. I would even suggest that we re-use the names of existing journals for these buttons. A “Nature-level” button, a “Science-level” button, a “Nature-Physics-level” button, a “PRX/PRL-level” button, a “Nature-Comms-level” button, etc etc.
  • A small set of experts perform a full technical assessment of the work, based on a publicly available dialogue
  • There is an editor who then decides whether the manuscript is technically correct or not (i.e. claims supported by evidence) after a fixed number of rounds of review
  • After the technical review, the manuscript is sent to a larger number of reviewers for an “impact assessment” review (i.e. a quick read). They click on one of the “perceived impact level” buttons to indicate what they think the equivalent journal would be, and provide at least a few sentences justifying their choice. These are also public (though likely anonymous)
  • Finally, the editor makes a decision of assigning the “impact level” based on the “impact assessment” reviews, and the manuscript becomes recognised by the community as a “Nature-like” paper, or a “PRL-like” paper, or a “PRB-like” paper, etc.
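
To make the final step concrete, here is a minimal, purely illustrative sketch of how an editor might combine the reviewers’ “perceived impact level” votes. The level names and the median rule are my own assumptions for this sketch, not part of any existing system:

```python
# Purely illustrative model of the proposed "impact assessment" step.
# The level names and the median rule are assumptions for this sketch.

# Ordered from lowest to highest perceived impact, reusing familiar
# journal names as labels, as suggested above.
LEVELS = ["PRB-level", "PRL/PRX-level", "Nature-Physics-level", "Nature-level"]

def assign_impact_level(votes):
    """Aggregate reviewer votes (each an entry of LEVELS) into one level.

    Taking the median on the ordered scale keeps a single outlier
    reviewer from dominating; a human editor could still override.
    """
    ranks = sorted(LEVELS.index(v) for v in votes)
    return LEVELS[ranks[len(ranks) // 2]]

votes = ["PRL/PRX-level", "PRB-level", "PRL/PRX-level", "Nature-Physics-level"]
print(assign_impact_level(votes))  # prints "PRL/PRX-level"
```

A real implementation would of course need identity, load-balancing, and appeal mechanisms; the point here is only that the aggregation itself is simple.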

A huge benefit of this is that there is only ONE review. You do not waste your time and the editor’s time by resubmitting to a different journal. You do not waste reviewer time by needing new reviewers each time you submit. The process is fully open and transparent, which I think would also help a lot with objectivity (it can still be anonymous). The database would also be able to cross-check for correlations and identify reviewer bias.

Crucially, it would also keep the certification of recognition from your peers. You would be able to point out on your CV that your peers felt that your work was relevant and important at the level of so-and-so journal. And this you can put on your CV, and then people like me in committees can use this information in a valuable way (along with the other many pieces of information we use).

I think it would be a great idea, but it would need a lot of things before it could start, like money (EU?) to set up infrastructure, hire editors, etc. I am not going to do this in my free -2 hours per day.

More importantly, it would need the FULL support of the community. People would have to be on board. People who are publishing Nature papers now would have to accept that “Nature-like” papers are just as good. Of course, why wouldn’t they be? The only difference is that there is no Nature editor. But people would have to really accept this. And ideally the people who submit to Nature now (particularly the senior, established ones) would have to put their money where their mouth is and really submit their new papers to “Nature-like” instead of Nature!

If we don’t have a critical mass of people who do that right from the start, then we are dead in the water, hopeless. We want to be mainstream, not a bunch of hippies on the sidelines complaining about how unfair the world is. We would need to coordinate large scale community support before the launch and make sure we make a running start.

You asked for my thoughts, and there they are :slight_smile: I think it would be a cool idea, but for now it mostly lives in my head (and a google doc I share with some collaborators). I’m not sure how to take it to the next step, but if anyone has some ideas, time, and energy, I am happy to help.


Oh, hi @jscaux Jean-Sebastien, I didn’t see your post up there! I hope you are not too offended by my SciPost bashing… :slight_smile: I do think it’s a great idea, and a great platform.

I am curious what you think of the idea I’ve been brewing with some colleagues. I have no idea if it would ever be possible.

BTW, I was discussing with Anton the other day the topic of “Technical Notes / Reports”:

I used to read loads of these (for example from Philips and RCA).

Currently, there does not seem to be any avenue for publishing such papers, which is a real shame. Do you think it would be possible via SciPost? Something like “SciPost Technical Reports”? Maybe with just a quick screen by a technically trained editor to kick out the spam, and then public discussion as a form of review?

Curious what you think


Dear Gary,

Thanks for all these comments! It’s nice to have a detailed discussion and to share some nice ideas (it’s perhaps also a good occasion for me to correct some misperceptions).

You write:

SciPost for me missed an important factor: what is, what I like to call, the “perceived impact level” they are target[t]ing?

I have always been clear that we aim for SciPost Physics to become the highest-impact venue in the field. The recently launched SciPost Physics Core will be the normal/high-impact venue (think Physical Review level). When we also operate in other fields, we will have SciPost Selections (see the general architecture we plan at ). I understand it’s not clear in everybody’s head at this stage (and I can understand why: it’s a time-dependent thing, and not fully installed yet!) but hopefully this will become clearer as time goes by.

You write:

Just everybody publishing all their work in SciPost is not a solution, it does not address one of the key things we rely on journals for: an assessment by experts in the field of the perceived impact / importance of a work.

Well besides what I wrote above, there are a couple of ways we have installed to address this:

  • first of all, the report contents being openly accessible means that competent people can quickly dive into the evaluation material and form a detailed, evidence-based assessment for themselves (and not “adjudication by tweet”, heaven forbid);
  • we still aim for a mild container-level layering through the simple hierarchy of journals we propose.

Of course this doesn’t go all the way to a single-digit-like evaluation summary, but I personally think that we want to avoid this at all costs.

In your nice Zenodo presentation, there are just a couple of things I’d like to point out:

On p.34 you misrepresent SciPost as an overlay journal. This is not the case: an overlay relies on external infrastructure for hosting etc. SciPost is a full-fledged publisher, including the entire infrastructure stack for all workflows relating to preprints, editorial, production, metadata, etc.

You say SciPost “is not enough”. You might find it amusing that I often get criticized for SciPost being “too much”. Glad to count you among the radical revolutionaries!

Now about your plan for a solution: by all means do forge ahead and implement it. As far as I’m concerned, and with the experience I now have, I’d however like to make the following comments:

  • don’t use terminology like “perceived impact”. This discourages meaningful, content-based professional-caliber evaluation, and encourages “shoot from the hip”, contemporary politics-style evaluation. I see so many dangers in “twitterizing” science (and am personally at war with this behind closed doors), and this could involuntarily help slide things further into this sorry direction.

  • don’t use journals as evaluation levels, for many reasons: it’s too perception-dependent; it’s time-variable; most importantly, it gives these other venues the leadership in defining what quality is. Defined like this, your solution can only ever be a derivative-class publishing venue.

  • you might be severely underestimating the amount of work required to get this off the ground on the technical level: you’ll have to educate yourself about a million things about publishing standards, metadata, web technology (including security), archiving, business, employment, finances etc. I could literally give a month-long workshop on all the things that are required at the backend (which nobody sees). There are no “canned” solutions offering you enough flexibility to implement what you have in mind.

  • you will not be able to fit this scheme into the installed recognition protocols: DORA itself isn’t really succeeding in reforming things, in large part because scientists themselves are not adopting new evaluation workflows (for the reason above).

  • you will find it very challenging to get academics on board: academics have the reflex of first disagreeing with everything, not of “enthusiastically jumping in”. You will only get them on board once it’s already working, leaving you with the classic catch-22. Don’t underestimate how conservative academics can be (this is in fact actively exploited by corporate publishers). You will have to keep yourself motivated even when your colleagues remain cold-hearted and immobile when presented with your fully-built-and-ready-to-roll solution.

As far as I’m concerned, what I am trying to do with SciPost corresponds to the right balance between what I think is needed, and what the community is able to digest. I would dare to hope that your perception that SciPost “doesn’t go far enough” has more to do with the latter, and that as time goes by, the community will slowly move in the right direction to make deeper reform possible.

About Technical Reports

That’s a very interesting and workable suggestion as far as I’m concerned. This could be implemented at SciPost with just a few clicks (I’m not joking) if it follows a general workflow similar to the one we have for our different journals. The refereeing could be simplified (super simple criteria; a shorter refereeing period of 2 weeks), as could the decision-making process (e.g. the decision could be made by a single editor in charge).

If you make a concrete suggestion for all the fields needed for a journal’s description (see for example ) then such a proposal could be looked at by our Advisory Board and Editorial College (Physics) and if they agree, we could install this.

1 Like

Hi @gsteele13, thank you for the thorough reply! I have a couple of problems with your rationale.

Upshot: Just everybody publishing all their work in SciPost is not a solution, it does not address one of the key things we rely on journals for: an assessment by experts in the field of the perceived impact / importance of a work.

I can’t speak to other people’s experiences, but personally I’ve never (in my admittedly short career) relied on journals to determine what is important to read. By the time something gets published it’s usually old news anyway. An example from my own group: Enhanced proximity effect in zigzag-shaped Majorana Josephson junctions is still unpublished, yet several groups have been working on zigzag junctions for several months now.

I suppose people do rely on journals to assess candidates as you say here:

And if I am in a committee and I have say 2 hours (in the evening after the kids go to bed since I don’t actually have time to do it during the day) to read 30 CVs and make a snap judgement of who I would invite and who I would not invite for an interview, “something” is highly valuable for me. It is just not possible for me to personally read every one of your papers in depth to determine if your work is of high quality or not: I just don’t have time. And there is no magic fairy that is going to somehow “create” more time for me to do this.

But in many cases it is hard (if not impossible) to estimate the impact of a paper as you mentioned. So wouldn’t a better solution be to advocate for academics to have more time to thoroughly evaluate candidates and their work?

Yep, that would be a better solution. And it has been advocated already. Specifically, all Dutch universities have signed DORA, which argues exactly against using journals for assessing individual researchers.

@gsteele wouldn’t you say that the decision of whom to invite for an interview is important enough that it would be reasonable to expect those making it to give it some thought? If the main thing you do is look at the journals, I would put more trust in someone who is perhaps less experienced than you but who dedicates more time to the evaluation.

1 Like

Thanks Jean Sebastien! I think it is great to have (and share!) these discussions. They are super-important.

I think it is clear this is a huge task (which is why it is still a pipe-dream floating around in a google doc, and now also a public forum :slight_smile: )

It would need very substantial financial support from the start. Let me try a finger in the air: to replace all journals in physics, I am going to hypothesise you would need a staff of at least 100 people. Say 100 kE/year for salary and benefits: this would need something like an operating budget of 10 ME per year to run. That is my shot-in-the-dark, completely uneducated guess, and it could easily be off by a factor of 10. In any case, a lot of money!

Of course, it is not a lot of money compared to the 17 billion USD profit of academic publishing that I googled when making my slides. If we assume they clear 25% on average as a profit margin, this would place their revenue at something like 68 billion USD per year.

(Compared to 10 ME, seems like peanuts? Makes me think that somewhere I missed something? Of course, we would be doing a lot less “other” stuff than the publishing companies do. Or maybe physics is a small piece of the pie across the board?)
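
For what it’s worth, the arithmetic above can be sanity-checked in a few lines; every number here is the rough guess from the text, not real data:

```python
# Back-of-the-envelope check of the guesses above; none of these
# figures are real data.
staff = 100                   # hypothetical staff for all of physics publishing
cost_per_person = 100_000     # ~100 kE/year salary and benefits
operating_budget = staff * cost_per_person
print(operating_budget)       # prints 10000000, i.e. the ~10 ME/year estimate

industry_profit = 17e9        # ~17 billion USD profit (googled figure)
profit_margin = 0.25          # assumed average margin
industry_revenue = industry_profit / profit_margin
print(industry_revenue)       # prints 68000000000.0, close to 70 billion USD/year
```

So the proposed budget really is three to four orders of magnitude below the estimated industry revenue, which is the “peanuts” observation above.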

With the EU making a lot of noise about “open science”, one could hope that they would be willing to put their money where their mouth is: fork out say 50 ME for 5 years to run a pilot project to eliminate physics journals.

My personal take on this: my proposal would already be a huge change for people to accept. It would be helpful for adoption if people did not also have to abandon the (maybe flawed) values-system that they have built their career on at the same time.

Of course, I would also strive to define more objectively what “Nature-like” means. And in time, once people have adopted it, we could probably drop these terms. But words are important; people identify with words, and feel threatened by uncertainty when you change them.

And maybe I am creating a “derivative-class” publishing venue. This does not bother me at all. For me, step one is to pry the publishing system out of the hands of the commercial companies and provide efficient, transparent, non-restricted publishing. And my feeling is that the fastest way to get the community to accept this is to transplant the existing system entirely in place.

(Note that the one assumption in the last sentence is that we all agree that editors do not play an actual role in assessing the importance of a scientific manuscript, and that this could be done entirely by the community, something I hope that most scientists would agree with…)

Once we have transplanted the entire system in-place into our own hands, we can then start to have a deeper discussion with each other about philosophical issues. What do we actually value in science? What should we assign value to in a scientific publication? These are much longer discussions that the community needs to have. Maybe generations of scientists even. But I don’t want to wait that long to get the publication system out of the hands of companies who earn 40% profit margins.

Of course I look at more than journals. But to be clear, I do not start reading papers in depth. It’s just not feasible. When reading the research statements, I might be triggered, click on one, and give it a scan. But there is just not time for me to go into depth on the content of the papers people publish, also because sometimes it is completely outside my field.

I completely agree! My only personal source of new papers is the arxiv, which I used to scan every morning. I must say, when the shit hits the fan with teaching (which it does more and more these days), I may miss a week or two. And then I just hope that my students will post papers in our mattermost “papers and preprints” channel.

And, of course, I’m not stupid or naive. I also judge for myself (sometimes with a good guess just from the title) whether a paper is in Nature Physics because it is the “favourite flavour of the day” or because it really includes cool new stuff.

There are certainly papers, for example, in Nature Physics where I think “OK, if they hadn’t made a big hoopla in the intro about this being important for this or that, this would be an APL”. And there is certainly a Matthew effect (either because of reviewer bias = “scary”, or because Matthew has more experience in knowing how to write papers for these journals). These are all things I take into consideration when evaluating the publications of a potential applicant.

But in general, when I read a Nature Physics paper, I usually think to myself “that was a pretty cool paper”. Yes, I know, this is not quantitative, not quantifiable. But scientists do not work with cold, hard numbers for everything: we also work with our gut, our instincts, and our feelings. And that is the basis of the journals’ “brand recognition”, which we are not going to get rid of in the community too easily.

Well, I do not think APS makes a 40% profit on publishing, and I still remember when PRB was considered a very good, and PRL an excellent, publication even for experiments, and PRX was not even on the horizon. Then the glossy journals came, and everybody started sending all their scientific production there. Maybe one could first ask oneself why this happened, before trying to replace the whole system.

True, although the same is in principle true for AAAS, and yet the Science Advances fee is $4500.

(Although this is much less than the 20 kE that a Nature Physics editor told me they would have to charge if they were open access?)

Prestige happened. It was already there; we just got more obsessed with it. “My awesome paper is much better than your awesome paper.” (Before, it was “my PRL is much more important than your SSC”. With Nature/Science, people just got a much bigger stick.) Other things also happened: we got much better at communicating in scientific writing. The way scientific papers are written and presented is very different from the PRLs of 1988. People demanded more of figures; PRL dragged their feet and got left behind. Then came the rise of supporting information: show me the details, don’t just tell me to trust you. I think this is good, and an evolution of this is now open data. Writing papers is, and always has been, evolving. My feeling is that the traditional publishers we used to use simply got left behind, and that is part of it.

For me, though, just as many things are broken with APS as with any of the “glossies”. I feel that the whole process needs a big kick towards more open, transparent, and efficient reviewing and publication. Going back to 1990s APS is not going to fix it: their journals are still predominantly closed, and their reviewing process is just as opaque and inefficient.

The easiest way forward that I see is to first transfer the entire system, the full stack of journals, into our own hands. First fix the lack of transparency and the tremendous inefficiency. Keep, for now, the same “system”: our same perception of values. Then, over time, with the system in our hands, we can have a discussion about re-evaluating our core values.

In a utopian world, yes. But unfortunately, it’s just not the case. With continually rising expectations, more students, fewer teachers, and less funding for the same number of scientific staff, I do not see my “spare time” increasing any time soon…

Sounds great! I’ll brainstorm a bit when I have some free time :slight_smile: and maybe ask for input from some people.