Script readers are powerful gatekeepers. They read and rate scripts on behalf of producers, studios and competitions, meaning that what they think of a script is critical.
Scoring well with readers could lead to your screenplay reaching the desks of the great and the good (who are hopefully also the rich and the powerful). Scoring poorly could mean that all the countless hours you put into your screenplay will just have been “character building”.
Script readers’ work is conducted in private and their feedback is rarely shared, even with the screenwriters they are rating. This means there is very little empirical research into what readers think a good script looks like.
Given the critical role they play in filtering scripts, this lack of data is a severe handicap for any aspiring screenwriter.
To tackle this, I partnered with ScreenCraft to crunch data on over 12,000 unproduced feature film screenplays and the scores they received from professional script readers.
The final results reveal patterns in how script readers rate scripts, and what you should avoid if you’re looking to maximise your script’s scores.
You can read the full details of what we found via the free 67-page PDF report. This article summarises some of the key points, although it can only scratch the surface, so for the full picture, I recommend you download the report.
The article below pulls out eight of the most practical findings for screenwriters looking to maximise the scores they receive from professional script readers.
Tip 1: Know thy genre
The script readers in the research dataset were asked to provide scores for a variety of specific factors such as plot, tone and concept. I used this to track how important each of these factors was to the success of scripts. The higher the number, the greater the correlation between that factor and the script’s overall Review Score.
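The correlation described here is presumably something like a standard Pearson coefficient between each factor’s scores and the overall Review Scores. As a hedged sketch (the numbers below are hypothetical illustrations, not the study’s data), the calculation looks like this:

```python
# Sketch of correlating a sub-factor's scores with overall Review Scores.
# The data is hypothetical; the report's exact statistics may differ.
import statistics


def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical per-script scores: characterization vs. overall Review Score
characterization = [6.0, 7.0, 5.0, 8.0, 4.0]
review_score = [6.2, 7.5, 5.1, 8.0, 4.3]
print(pearson_r(characterization, review_score))  # close to 1: strong link
```

A value near 1 would mean the factor moves almost in lockstep with the overall score; near 0, almost no relationship.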
The biggest correlations for success are within the subcategories of characterization, plot and style. Among the least important factors are formatting, originality and the script’s hook.
The chart above shows the data for all scripts in our dataset, but there were differences between genres. There are charts for eleven genres in the full report (pages 11-16) but to give you a sense of what I mean, here is the one for Family scripts, a genre which places the highest premium on catharsis.
Tip 2: If you’re happy and you know it, redraft your script.
I measured the average sentiment of each script, producing a value between minus one (i.e. entirely negative) and one (i.e. entirely positive). A value of zero would indicate that the script contained an equal balance of positive and negative elements.
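As a toy illustration of how a script’s sentiment can land on this minus-one-to-one scale (a simplified lexicon approach with a made-up word list, not the study’s actual method), consider:

```python
# Toy lexicon-based sentiment scoring: each line gets a score in [-1, 1],
# and the script's sentiment is the mean over all lines. The word lists
# are illustrative only, not the study's actual sentiment model.
POSITIVE = {"love", "happy", "hope", "smile", "win"}
NEGATIVE = {"hate", "dead", "fear", "cry", "lose"}


def line_sentiment(line: str) -> float:
    words = [w.strip(".,!?") for w in line.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total


def script_sentiment(lines: list[str]) -> float:
    scores = [line_sentiment(l) for l in lines]
    return sum(scores) / len(scores) if scores else 0.0
```

A script whose positive and negative lines balance out lands at zero; a relentlessly bleak one drifts toward minus one.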
Drama and Thriller scripts have the strongest negative connection between their average sentiment value and Review Score. Dramas with a sentiment value between 0.20 and 0.25 receive an average score of 4.68 out of 10, whereas much more negative scripts (i.e. those with a sentiment value between -0.20 and -0.15) received an average score of 5.85.
My reading of these findings is that film is about conflict and drama. For almost all genres, the happier the scripts were, the worse they performed. The one notable exception was Comedy, where the reverse is true.
Tip 3: Some stories work better than others.
Using sentiment analysis, the vast majority of scripts can be grouped into one of six basic emotional plot arcs. It’s hard to summarise such a complex topic in this short article so I suggest you refer to the full report (pages 19-27) but to give you a flavour of what I found, below is the chart for Fantasy scripts.
Fantasy scripts which use a ‘Rags to Riches’ arc (where average sentiment rises as the script progresses) perform much more poorly than those using a Cinderella arc (where the sentiment rises, falls and then rises again).
Tip 4: Swearing is big and it is clever.
There is a positive correlation between the level of swearing in a script and how well it scored, for all but the sweariest screenplays.
Tip 5: It’s not about length, it’s what you do with it.
The exact length doesn’t matter too much, so long as your script is between 90 and 130 pages. Outside of those approximate boundaries scores drop precipitously.
Tip 6: Don’t rush your script for a competition.
The majority of the scripts in the dataset were submitted to script readers as part of a screenplay competition. When I compared how close to the competition deadline each script was last saved with how it performed, I found a fascinating correlation – the closer to the deadline a script was finished, the worse it performed.
I interpret this to mean that if you’re rushing a script for a deadline then you’re not going to have spent enough time redrafting. Conversely, it’s not surprising that scripts which had ample time to be improved and tweaked perform better.
Tip 7: VO is A-OK.
Some in the industry believe that frequent use of voiceover is an indicator of a bad movie, but I found no such correlation. May I politely suggest that any complaints on the topic be sent to editors, rather than writers.
Voiceover is much more likely to be used in Action and Sci-Fi scripts than in Westerns or Historical scripts.
Tip 8: Don’t worry if you’re underrepresented within your genre – it’s your superpower.
Gender is a complicated topic and I have put far more detail on this topic and our methods in the report (pages 61-62). For this article, I will just share an interesting finding relating to how the scores differed by gender and genre.
Female writers outperform male writers in male-dominated genres (such as Action) and the reverse is true in female-dominated genres (such as Family).
My reading is that when it’s harder to write a certain genre (either due to internal barriers like conventions or external barriers like prejudice) the writers who make it through are, by definition, the most tenacious and dedicated. This means that in a genre where there are few women (such as Action) the writers that are there tend to be better than the average man in the same genre.
Bonus Tip: ‘Final Draft’ writers outperform writers using other software
There is a correlation between the quality of a script and the screenwriting software used to write it. Scripts written in Final Draft performed the best (an average score of 5.3 out of 10) whereas scripts written in Celtx performed much worse (an average of 4.7).
It should be noted that I’m not suggesting that the programs are affecting the art. There are likely to be a number of factors contributing to this, not least the fact that Celtx is free to use, meaning that more early-stage writers use it than its paid competitors.
This project is not about measuring art or rating how good a story is; it’s about decoding the industry’s gatekeepers. Rather than suggesting “this is what a good script contains,” we are saying “this is what readers think a good script contains”.
In the real world, this distinction may not matter as readers are an integral part of the industry’s vetting process. But it is important to remember that all the advice to screenwriters in this article and the full report is in relation to the data and through the lens of what script readers have revealed in their scores.
The most talented writers can overcome most, if not all, of these correlations. They can make the impossible possible, spin an old tale a new way, induce real tears over imagined events and lead us to root for characters we know to be doomed.
Privacy was something we took seriously throughout this project. ScreenCraft provided the score data in an anonymised form and withheld data which would have offered deeper insights at the cost of reasonable privacy. We fully support their efforts to balance educational research aimed at helping screenwriters with protecting the privacy of the writers involved.
I am very grateful to ScreenCraft for partnering with me on this research. It simply would not have been possible without their data, their trust and all the help and advice along the way.
I was ably assisted in the research by Josh Cockcroft and Liora Michlin. Their input was vital to make sense of such a large dataset of scripts and scores, as well as being able to present it in a digestible manner for screenwriters.
This research was funded by proceeds from my last major project, The Horror Report. The Horror Report was published via a ‘Pay What You Can’ model, with all the income going to support future film data research.
I am very grateful to everyone who purchased a copy and especially to the generous people who chose to give more than the minimum. The script readers research simply would not have been possible without such contributions. Thank you.
Next week I’m going to share another aspect of this research – details of what the average screenplay looks like.
Thank you for sharing this amazing work. It is undoubtedly of great help to keep mastering our craft and travel deeper into the script readers’ mystery.
Fascinating insights here as always, but as you say yourself (and it is the most critical insight)
“The most talented writers can overcome most, if not all, of these correlations. They can make the impossible possible, spin an old tale a new way, induce real tears over imagined events and lead us to root for characters we know to be doomed.”
My observation would be to query the use of the word “gatekeeper” in this context. There are of course many genuine “gatekeepers” in the movie business, some of whom exercise enormous power, but as regards script readers this is not in my view an apposite or satisfactory metaphor. The function here is one of sifting, selection and quality control and is rarely exercised, at least in my bit of the universe, by a single individual working in isolation – at least as far as the better candidate scripts are concerned. It is a more collaborative process than the loaded word “gatekeeper” would suggest.
Good points. I’d say that while script readers don’t hold anywhere near the power of a studio head (or even an investor) they can still have a significant impact on the journey of an aspiring screenwriter. The vast majority of scripts receive a ‘pass’ and so could end their journey before it’s even begun. Likewise, within script competitions, it’s only very high placing scripts that will get noticed.
In both cases, the immediate challenge the writer has is to get the reader to score the project highly. Hopefully, some of our findings can help around the margins (notwithstanding that talent and hard work are needed above all else!).
Thanks for this info! Very cool and interesting. Loved it.
Thank you for this! It’s very helpful — especially, “Tip 5: It’s not about length, it’s what you do with it.” Whether it was a rom-com or a thriller, I had a couple of Coverage services say that 109 and 117 pages respectively were “too long.” This anonymous so-called “pro” had the nerve to literally tell me that my rom-com SHOULD BE UNDER 100 PAGES! I was so furious I contacted a friend who is a longtime associate in Steven Spielberg’s office, and he emailed me back: “That’s ONE person’s ridiculous opinion. We don’t File 13 any well-written script just for page length.” He suggested I download rom-com scripts and check them for PAGE length not SCREEN length. I was amazed to find 15 rom-coms that were all 110 pages or longer. (The Big Sick = 114, Notting Hill = 124, Something About Mary = 134, The Holiday = 138, etc.) Again, thanks! I’ll share this info with the barista or supermarket checker/film school student that cost me $149 for his/her lousy advice.
I’m not yet sure how I will use the statistics you gathered, but I appreciate the level of effort you put into it. I read your entire report and believe it will eventually affect some writing choices that I make.
Another fascinating work. Was this research done on one contest only, or several? I admire the work you are doing; it will certainly start to fill in gaps. I may have read this wrong, but you analyse the “date of last save” in a way that seems inaccurate. In my view it is wise to read through one’s work one last time before submitting, and as a matter of habit that last little check-and-save would be interpreted in the report as indicating rushed work. What if an address or phone number had changed? A wise writer will finish a work and let it simmer, but then go back for revisions with fresh eyes.
thanks for your continued efforts to illuminate the dark corners of the industry.
Yes, you’re absolutely right that a file can be re-saved for a number of reasons. We explained the methodology throughout the report so that everyone can come to their own conclusions about how robust each of the findings is. As this is not a major finding, nor one with much (if any) impact on the actions of writers, we felt it better to include and explain it than remove it.
Fantastic research as always, thank you for your wonderful insight and time spent doing this. I certainly won’t be rushing to meet a competition deadline!
Read the entire report. Your findings were fascinating.
I was intrigued (indeed, disturbed) by the finding that scripts containing much profanity performed better than those with minimal usage. Seems to me this is a reflection of the mores of the readers. No doubt swearing is much more commonplace in real conversation these days than in the past, but I’m not convinced it’s concomitant with good characterisation. Of course, some genres will deal with characters for whom swearing is indeed part of their character (and I’m thinking here that writers such as Quentin T legitimise this), but at the same time I think there’s a correlation b/t screenwriting and stand-up comedy. So many so-called comedians swear for the sake of it, and convince themselves that the profanity is funny in itself – I am not convinced. Billy Connolly WAS funny when he swore on stage because he was simply reflecting his native Glaswegian culture (where, let’s face it, every second word is “fuck”!).
I have used profanity in one of my scripts – the character, a prim and proper middle-aged socialite, retorts “fuck you” when another character suggests she has entered menopause. It’s not the language she would normally use but she makes a point. In a way this debate also applies to the use of gratuitous violence (and indeed sex) in screenplays. I’m not sure if it was Nunnally Johnson or Ring Lardner who once said that the intimation of violence is much more effective than the actual showing of it. As with swearing, sometimes a good actor can convey the same sentiment without any utterance at all. That’s showing respect for the intelligence of the audience.
I suspect the correlation in these scores has less to do with the actual swearing and more to do with how we normally see swearing crop up in a script. Swearing is often used as a crutch (for lack of better term) in emotional moments. So movies with more swearing may have more intense emotion to pull at the reader. But when we get to that filthy category, again it’s likely not the level of swearing that lowers the score but the fact that excessive swearing stems from weak writing and the inability to get the message across any other way.
I don’t know, really, but that’s my thought.
Hi Rae. That’s a great thought! I don’t know if it’s true or not as the data doesn’t give us the signals we would need to test it but it certainly sounds plausible to me.
Stephen, Josh, and Liora, THANK YOU!! Your generosity is spectacular. Thank you so very much for gifting the community with this wonderful tool. Much appreciated.
Thanks, Stephen, great work, really helpful. Much appreciated
Thank you for doing this tremendous work. I found the analysis extremely enlightening.
Thanks for another fascinating analysis, and congratulations on some extraordinary work.
I would like to know whether the “fuck” count included fucks in “Motherfucker”? Which prompts the thought that America might have more motherfucker-ers, while British writers might be more likely to be simple fuckers. I’d also be interested in a transatlantic survey of buggery, to confirm my suspicion that Britain has a higher proportion of buggerers. Are judges more or less biased towards motherfucking or buggery?
Gleeful rudeness and merriment aside, can the data separate British from American contests, and British from American scripts? Are non-Americans at a disadvantage when entering US contests (and vice versa)? Are films set outside the US less likely to get higher ratings, for some genres at least? (Period drama – possibly not. Crime /thriller /family drama /comedy? Hmm, I’d love to know. Should I relocate my Reservoir Dogs meets Hunger Games story out of Walthamstow?)
If I could only make one point, it would be this: that overall averages for contests don’t really help writers. The value of contest success only comes if your script reaches semi-final / longlist stage or better. My understanding is that 90% of scripts don’t make this level.
So to inform my work, I need to know what’s more likely to work for the screenplays scoring at least 7-8 on your rating scale (or whatever the semifinalist and finalist thresholds actually are).
I feel a bit churlish to say this after so much work has been done, but is there any scope to perform a similar analysis of a smaller sample of the highest-rated screenplays? And cross-reference findings back to the overall averages? Then we’d perhaps get a sense of what matters at the business end of contests.
This is so interesting. I have a question though: do you have a definition of the words you use in your study, for example:
Pacing (you indicate that pacing is different in different genres)
Please define all 13 words you have been using in this study, because I understand these words can be used differently, and to understand your study I must know how YOU define them.
I also wonder what you mean by emotional arc. Do you mean the reader’s emotional arc or the protagonist’s emotional arc, and how did you measure that part?
I also wonder why you looked at 13 things overall, but only 11 when it comes to different genres. Style, for example, is 0.68 overall but is not mentioned in any of the genres (as far as I could see). Why not?
I also wonder in which genre you put all the romantic love movies, and why they do not have their own genre.
Best regards: Pia Lerigon, Swedish author of love novels
Those are standard terms used in the script reports, not ones created by us. It’s likely that some readers will have a slightly different definition of each to their peers. We standardised the scores to correct for particularly generous or harsh reviewers, but you’re right that they remain subjective. Sadly, that is the nature of the role of script reader, and hence why we have focused the report on measuring their views rather than trying to reach an objective measure of the factors.
The arcs were created via sentiment analysis of each line and then averaged into 2% slices of each script (so 50 per script). There’s more detail on this in the report.
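The slicing step described above can be sketched as follows (a simplified reconstruction of the idea, not the project’s actual pipeline):

```python
# Average per-line sentiment scores into 50 slices of 2% of the script
# each, producing a 50-point emotional arc. Simplified reconstruction of
# the approach described in the reply, not the project's actual code.
def emotional_arc(line_scores: list[float], n_slices: int = 50) -> list[float]:
    n = len(line_scores)
    arc = []
    for i in range(n_slices):
        # Integer boundaries of the i-th 2% slice of the script
        start, end = i * n // n_slices, (i + 1) * n // n_slices
        chunk = line_scores[start:end] or line_scores[start:start + 1]
        arc.append(sum(chunk) / max(len(chunk), 1))
    return arc
```

Plotting the 50 resulting values against script position is what lets arcs like ‘Rags to Riches’ or ‘Cinderella’ be identified and compared.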
There were not enough scores for all of the factors to include them in the genre breakdowns. I can’t remember offhand what our threshold of significance was for this but again it’s in the footnotes of the report.
Romance wasn’t one of the options the screenwriters were given when self-reporting their primary genre. We actually did some work on determining the levels of love in each project (between primary characters) but we couldn’t get it to work reliably enough to publish.
Hope that helps!
Thanks Stephen. As I’m from Sweden and a novel writer, not a screenwriter, I am not that familiar with the English terms for these things, but I have tried to google the words and cannot find a specific definition of them. If you have a web page or book you could recommend, I would gladly study the meaning of these terms.
I studied statistics for one semester at university, but I had never heard of sentiment analysis before, so that is one thing I need to study to fully understand.
Hm … so no one writes romances any longer. Strange. I do 🙂
Maybe they just forgot to say that it is a romance.
Best regards: Pia
Sorry to disturb you again, but I have now googled the words style, tone and voice, and I cannot find a definition that is shared among the web pages I have visited.
For example: some people say that voice is something you as an author cannot change. It is like when J.K. Rowling (the author of Harry Potter) used the pseudonym Robert Galbraith and a computer figured out that it was Rowling behind this pseudonym.
Other people say that you can choose the voice and that it could be formal or personal, which indicates that you actually CAN change your voice.
I have personally always thought of voice as your posture. You can see a person from a distance and from behind and still know who it is, just by looking at that person’s posture. Even if you try to change your posture it is hard, because you go back to it when you are not thinking about it. But in the long run, for example when you grow from a kid into an adult or from an adult into an old lady or gentleman, the posture will gradually change. I think of voice as something similar. You can change it but it is fucking hard and will take you years. It is nothing you can decide to change from one book to another.
So of course I wonder which definition YOU use.
You also write this in your study:
The most important factors for Comedies are characterization and plot. Interestingly, the pace of a comedy script has a far weaker connection to its overall score than that of any other genre. In this context, pacing refers to the speed of plot points moving forward, rather than fast-talking characters.
So here you obviously use two different definitions of pacing. Did I understand correctly that you only use pacing in the sense of the speed of plot points moving forward for comedies? Or did you use that definition throughout your study?
Personally I do not use either of these two definitions when I think of pacing. For me it is more the length of words and sentences that makes the reader take in the text faster or slower – that is the way James V. Smith Jr. explains it in his book The Writer’s Little Helper. So I use his system to measure the changes in pacing in my own writing.
I have studied a lot of American literature on screenwriting, for example Syd Field, and I still never found a definition of these terms that is exactly the same in all books.
So I would be very glad if you could help me sort out which definitions YOU have used in your study.
Best regards: Pia
No problem at all – all discussion is good!
It’s tricky to find literal translations for all of these words into other languages. There is a bias toward English-language scripts because that’s the language all the reports were conducted in, meaning that there is a certain amount of cultural understanding in the subtext.
Here is the guidance given to the readers.
Your wider point about differing interpretations is spot on. So much of the creative process is subjective and personal, and the nature of script readers no less so.
Thanks Stephen for your answer <3
Best regards: Pia
Oh my God, this is amazing! This is true screenwriting theory science. Thank you for putting this together guys. It’s really helpful.
Wow. Thanks a lot. I’ve downloaded the report and will study it carefully.
I have serious data envy. I was wanting to do a very similar analysis a couple of months ago but ran up against the problem of getting access to the script readers’ data.
So thank you for doing this and presenting this so clearly both in this summary and the PDF.
Will you be offering the raw data or ranges for your data at all?
First, thank you for the report. This was extremely helpful and informative!
Did you happen to calculate the standard deviations for the genre average review scores that appear on page 10? I’m curious as to what score one would need in each genre to be in the top quartile, top 10%, etc., of each genre.
Interesting insights, confirmed by my recent visit to a multiplex to suffer in silence through a truly terrible screening.
All of the “Coming Attractions” ads (about twelve, although I stopped counting well before it got that high) promoted new versions of movies released ten, twenty, even thirty years ago. Clones. Remakes. Sequels. Prequels. Or, to be charitable, “re-imaginings”. No attempt at originality or path-breaking — veering off in new directions, breaking new ground. This despite the fact that many of the movies now being regurgitated were bold path-breakers when they were released.
Thank you for your time and dedication in methodically putting together such an analytical masterpiece! Many blessings!!!
Excellent data analysis. Very helpful.
I am just getting down to putting our full musical through Final Draft, assuming FD is the industry standard there too… although I haven’t found info to confirm. RWT
Hello, thank you for the wonderful and insightful study. The link does not appear to be working. Could you kindly advise how I may secure a copy?
Thank you very much for sharing such valuable information with us.
Amazing research man, thank you. Extremely helpful.
Discussing software is misleading, as movies have to be in Final Draft for technical reasons. They may start life in pencil and paper or InDesign, Word, or any other software, but with the need to update scripts during shooting and prep, everybody’s script has to update automatically and simultaneously. That means that everybody from the sound designer to the editor, from wardrobe to ADR has to be working from the same script and using the same software.
Thank you Stephen for compiling this information. I’ve read and analyzed several thousand screenplays in my career and many of those were before the ‘rating numbers’ became the major way of deciding whether a script got picked up or not. I was/am a script reader more in the Canadian industry rather than the US. The numbers system makes it much easier to assess screenplays, given almost everyone at one time or another seems to take a stab at writing them, thinking it’s a quick money making scheme (it’s not, of course). I prefer the more intensive report-writing way of judging myself, and it is quite possible some very good potential films get tossed in the bin using the numbers system.
Stephen. A well-written article. I’m a freelance screenplay competition reader/score/analyst/feedback-notes/judge for a good few companies. A talented screenplay writer too… hence I’m a career analyst. So true to what you said in that the competition reader is the gatekeeper, in that he/she is the first to review/read/score the script and will decide if it moves on to a JUDGE? Or, trash it/pass! When I started in this business, I approached many competition companies to get my foot into the business – become the established pro-reader I now am. Now here is the scary part… in starting off, the small, non-well-known competition companies took me on as a non-paying reader without even asking me for my profile/bio/CV, or even a short script I’d written – NOTHING! Hell, I’m free labour! Shocking! Stephen, if you have a mail/follower service? keep me posted, as I really like your approach and wisdom. Regards, Barry.
Hi, is there any chance of the raw data/review scores being made available? (I’d love to have a look at it as part of my own research)