Update Evaluation Criteria to Match OCS Priorities #20
If it were up to me, I would lower the cost weighting to 20%. The vendors know that the budget for this is $300K. If we put the cost weighting higher than 20%, it might lead to a "buying in" situation where a vendor submits a low bid just to get an award. The tech is more important than the cost, in my opinion.
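To put some (entirely hypothetical) numbers behind the "buying in" concern, here's a rough sketch. It assumes a simple lowest-bid ratio for the cost score and made-up technical scores, which may not match however cost actually gets scored here:

```python
# Hypothetical illustration: how the cost weight affects the total score.
# Assumes cost points = 100 * (lowest bid / vendor bid) and invented
# technical scores; not the actual evaluation method.

def total_score(tech_score, bid, lowest_bid, cost_weight):
    """Combine a 0-100 technical score with a ratio-based cost score."""
    tech_weight = 1.0 - cost_weight
    cost_score = 100 * (lowest_bid / bid)
    return tech_weight * tech_score + cost_weight * cost_score

vendors = {
    "Vendor A (strong tech, full $300K bid)": (90, 300_000),
    "Vendor B (weaker tech, lowball bid)": (70, 200_000),
}
lowest = min(bid for _, bid in vendors.values())

for cost_weight in (0.40, 0.20):
    print(f"\nCost weight = {cost_weight:.0%}")
    for name, (tech, bid) in vendors.items():
        print(f"  {name}: {total_score(tech, bid, lowest, cost_weight):.1f}")
```

With these made-up numbers, a 40% cost weight puts the lowball bid ahead (82.0 vs 80.7), while at 20% the stronger technical proposal wins (85.3 vs 76.0).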
@DanaPenner @susanjabal let me know if you want to discuss. Ultimately it's your call on this one.
Discussed and approved to go ahead.
@DanaPenner @randyhart @mheadd
Looks good to me, @susanjabal
Looks good. |
We had no idea how to score the verbal presentation on our last DHSS procurement, so if we're going to include that as a separate factor, I think we have to agree on what we're measuring with it. With the interview, we were asking questions about their proposal, and if they told us something about their staffing plan that was a red flag, it only made sense for that to affect their staffing plan score, not their verbal presentation score. Ultimately, we didn't know what we were scoring. How well-spoken they were? Our retrospective cited that problem, and we concluded that it didn't make sense to have a separate score for the verbals. So if we're going to score verbals separately, I think we'll want to figure out what we're evaluating that isn't already covered by the other criteria.
You're right, @waldoj - the interviews were intended to be free form, so we didn't go in with a strong scoring metric or approach. If we need something there, it might make sense to develop it based on things we'd like to see (or are concerned about) in the interview. Some things that come to mind after our DPA experience include:
Those are just some ideas.
I think it might make sense to consider the two things in the context of a job interview. Our evaluation criteria are like the qualifications you put out with the job posting. The initial written proposal is like the resume and/or application from all of the applicants. We score these against the evaluation criteria to make an initial cut of the most qualified applicants, who are then asked in for the job interview. When they come in for the interview, we are still interested in the same evaluation criteria; the interview is just another forum for due diligence before making the selection. After the interview, the points from the evaluation of the written proposal should be updated to account for any new information that comes out of the one-on-one interaction. Does that analogy make sense?
@randyhart I think the job interview analogy suggests that we have a single set of evaluation criteria and score them twice: initially, based on the written submission, and then a second time after the verbal interview. I do like having a place to score the interview distinctly, rather than just changing points in the pre-interview scoring categories. I think there was a sense in the DPA solicitation that maybe that's all we were doing, i.e., just going back and updating our scores in other areas. Then the question is: if we're only updating other scores, what is the point of separately scoring the interview?

We went into the DPA solicitation interviews with an approach that was open ended and probably focused on details that were covered in our other scoring categories. What I noticed is that where their answers really mattered, it was because they were able to move the line by either succeeding or failing, often somewhat dramatically, at answering our questions. It wasn't just that they had a better or worse answer in terms of content; it was also that they were able to respond in the moment in a way that, for example, demonstrated team strength (or weakness), or their grounded experience (or lack thereof) in agile.

I am advocating that there are other dimensions we should assess in the interview that don't fit in the written proposal. So we should use the interview both to improve our assessment of the other scoring categories and to assess those "verbal only" dimensions. The ideas I put down above are a fair representation of what those other dimensions might be, though I was a little terse. I also like giving 20% of the score explicitly to the interview because it conveys to the offeror that they have to take that interview seriously. I'm not too attached either way, though I think the process we used for DPA served us reasonably well. Sorry I ran on there a bit; trying to respond between meetings :-/
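To make that concrete (the category names, weights, and scores below are made up, purely for illustration), the tally might look something like this:

```python
# Hypothetical tally of the two-pass approach: written-proposal scores that
# can be revised after the interview, plus a distinct interview category.
# Categories, weights, and scores are invented for illustration only.

written = {"technical approach": 85, "staffing plan": 70, "cost": 90}
revised = {**written, "staffing plan": 60}    # updated after the interview
interview_score = 80                          # scored against its own criteria

weights = {"technical approach": 0.40, "staffing plan": 0.25, "cost": 0.15}
interview_weight = 0.20                       # the explicit 20% discussed above

total = sum(weights[c] * revised[c] for c in weights) + interview_weight * interview_score
print(f"Total: {total:.1f}")  # 0.4*85 + 0.25*60 + 0.15*90 + 0.2*80 = 78.5
```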
Good morning everyone - I am back from a week away and working on these RFP issues today. All of the input provided above is valid and makes sense; we could go either way on evaluating the verbal presentation separately or integrating it. How's this: as suggested above, the resultant weights would be: Thoughts?
@susanjabal @randyhart Once we resolve issue #16, I think we can finalize this one as well.
@sandralee19 Adding you to this issue.
The criteria in Section 5 of the draft will need to be updated to reflect OCS priorities.
We should also discuss the evaluation percentage for cost. The requirement is 40%; however, as you see below, we currently have it at 20%. We can request a waiver if we can justify why.
Any changes should also be reflected in the table of contents.
Also, the cost proposal exhibit may need to be updated based on this discussion. In the exhibit, the total cost figure used is as follows:
This may need to be altered based on team discussions.