Rosalind Community Support

Introduce difficulty levels / efficiency points

Some of the problems are quite easy to brute-force on a modern PC, even with a quite inefficient algorithm or implementation. However, most of these algorithms won't scale well (while being sufficient for the given test dataset). It would be interesting to get additional, larger datasets for the same problem with the same time limit, so that you need an efficient solution. It is probably better to keep this score separate (or use it for badges only) so as not to penalize people with really ancient hardware, who would need an efficient algorithm even to pass the base case.

For instance, to complete a problem you solve the 10kbp dataset. To earn efficiency points you also solve the 1000kbp dataset; these points can figure into badge award calculations, or even feed a separate ranking. It can be expanded to levels: solve for 10kbp and pass with 0 efficiency points, solve for 100kbp and pass with 1 efficiency point, solve for 1000kbp and pass with 2 efficiency points, etc. The sizes will need to be tuned so that the larger datasets are solvable only through algorithm/implementation improvements, not simply because someone has more powerful hardware (i.e. the datasets will need to be substantially bigger). People with access to clusters or other number-crunching units will have an unfair advantage, but that has always been the case.
from Victor · 4 years & 341 days ago · 6 votes · 1 answer & 2 comments
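
To make the proposed tiers concrete, here is a minimal sketch of how such a scheme might be scored. The tier sizes, point values, and function name are illustrative assumptions based on the example above, not anything Rosalind has specified.

# Hypothetical sketch of the tiered efficiency-point scheme proposed above.
# Sizes and point values are illustrative, not actual Rosalind rules.
TIERS = [
    (10_000, 0),     # 10kbp dataset: base pass, 0 efficiency points
    (100_000, 1),    # 100kbp dataset: 1 efficiency point
    (1_000_000, 2),  # 1000kbp dataset: 2 efficiency points
]

def efficiency_points(largest_solved_bp):
    """Points for the largest dataset solved within the unchanged time limit."""
    points = 0
    for size_bp, tier_points in TIERS:
        if largest_solved_bp >= size_bp:
            points = tier_points
    return points

assert efficiency_points(10_000) == 0
assert efficiency_points(150_000) == 1
assert efficiency_points(1_000_000) == 2
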
Good idea, thank you, Victor!
from Rosalind · 4 years & 337 days ago
You raise a very good point. There is another small issue with the current problem statements (at least in the problems I have solved so far): very conservative, explicit upper limits on input size.

The issue is that on a modern computer, using any high-level language, it is very easy to implement a solution that can deal with "arbitrary"-size input (and by arbitrary I mean "whatever fits in main memory"). Having explicit size limits in the problem description almost invites solutions that consider neither A) memory and time efficiency nor B) the possibility of buffer overflows with bigger input.

I know that this is not a correctness issue, but in practice (and with real-world problems) not taking care of these things will come back to bite you in the a** eventually.
from anonymous · 4 years & 332 days ago
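
A minimal sketch of the kind of solution this comment argues for, assuming a simple nucleotide-counting task: the input is processed as a stream in fixed-size chunks, so it works for input of any size without assuming a limit. The chunk size and the task itself are illustrative assumptions.

import sys
from collections import Counter

def count_nucleotides(stream):
    """Count symbols in a DNA stream chunk by chunk; memory use stays
    constant no matter how large the input is."""
    counts = Counter()
    for chunk in iter(lambda: stream.read(1 << 20), ""):  # 1 MiB text chunks
        counts.update(chunk)
    return counts

if __name__ == "__main__":
    c = count_nucleotides(sys.stdin)
    print(c["A"], c["C"], c["G"], c["T"])
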
I like the size-limit specifications because they fit my experience (i.e. with real-world problems) better, where time spent writing code matters: it's not time-efficient to over-optimise a solution, because I'm frequently writing code for things that haven't been done before. For example, there's no need to spend extra time modifying some code to handle INDELs in 3Gbp sequences if you're only working with 16kbp mitochondrial sequences. I've noticed that some of the earlier problems in Rosalind start off with small inputs, while the later problems have larger inputs, which are easier for me to handle once I've got into the swing of that particular problem category.
from anonymous · 4 years & 329 days ago

Similar Feedback

Using badges to identify problem topic
4 years & 316 days ago · Accepted
Forum for sharing ideas?
4 years & 339 days ago · Implemented
One in a Hundred Badge
4 years & 351 days ago · Resolved
LING dataset
4 years & 342 days ago · Resolved
bring back task-based badges (1 in 100, etc)
4 years & 285 days ago · Waiting
