this software engineer talks about “writing code,” which may not seem relevant to our work, but if you read a bit of the article, you’ll see that he is also describing study design, measurement, data files, variable transformations, and statistical tests. when we do research, we make a million decisions, and not all of them are “perfect.” every decision has an impact of some kind, but he is right that no research is ever perfect. nothing about life is ever perfect.

so if you think your work is not good enough to send to a conference or a journal, think again. the entire system of blind review is designed to screen out studies that may not be “good enough”; it is not designed to select “perfect” ones. once your work reaches the academic world through conferences or journals, it goes through another kind of evaluation: people read it and build their research on it (a compliment to you), they ignore it, or they point out problems. this is the evaluative part of the scientific method.

if you don’t send your work out, if it lives only on your computer, then you are short-circuiting the development of your field. send your work out! if it is rejected, send it out again! and again! and again! rewrite, reanalyze, and send it out again! it is your duty to do so.