Why does this O(n^2) code execute faster than O(n)? [duplicate]











26 votes













This question already has an answer here:




  • Can an O(n) algorithm ever exceed O(n^2) in terms of computation time?

    6 answers




I have written code for two approaches to find the first unique character in a string on LeetCode.




Problem Statement:
Given a string, find the first non-repeating
character in it and return its index. If it doesn't exist, return -1.



Sample Test Cases:



s = "leetcode" return 0.



s = "loveleetcode", return 2.




Approach 1 (O(n)) (correct me if I am wrong):



class Solution {
    public int firstUniqChar(String s) {

        HashMap<Character, Integer> charHash = new HashMap<>();

        int res = -1;

        // First pass: count the occurrences of every character.
        for (int i = 0; i < s.length(); i++) {
            Integer count = charHash.get(s.charAt(i));
            if (count == null) {
                charHash.put(s.charAt(i), 1);
            } else {
                charHash.put(s.charAt(i), count + 1);
            }
        }

        // Second pass: the first character whose count is 1 is the answer.
        for (int i = 0; i < s.length(); i++) {
            if (charHash.get(s.charAt(i)) == 1) {
                res = i;
                break;
            }
        }

        return res;
    }
}


Approach 2 (O(n^2)):



class Solution {
    public int firstUniqChar(String s) {

        char[] a = s.toCharArray();
        int res = -1;

        // A character is unique exactly when its first and last occurrences coincide.
        for (int i = 0; i < a.length; i++) {
            if (s.indexOf(a[i]) == s.lastIndexOf(a[i])) {
                res = i;
                break;
            }
        }
        return res;
    }
}


In approach 2, I think the complexity should be O(n^2), since each indexOf (and lastIndexOf) call is O(n) and is made inside a loop that itself runs n times.

But when I execute both solutions on LeetCode, I get a runtime of 19 ms for approach 2 and 92 ms for approach 1. I am confused; why does that happen?

I suppose LeetCode tests both small and large input values, for best, worst and average cases.



Update:



I am aware of the fact that O(n^2) algorithms can perform better than O(n) algorithms for all n below some threshold n1. What I wanted to understand is why that is happening in this case, i.e. which part of Approach 1 makes it slower.



LeetCode link to the question










java time-complexity big-o

asked Nov 17 at 22:43 by Nivedita; edited Nov 19 at 4:38 by Peter Mortensen

marked as duplicate by Mike 'Pomax' Kamermans, BJ Myers, Michael on Nov 19 at 9:55

This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.

  • What's your reasoning that the second solution is n^2? Both indexOf and lastIndexOf iterate the string at most once, so the total complexity is 2*n for string length n. n^2 would mean you iterate the string once for each of its characters. – daniu, Nov 17 at 22:56

  • @daniu and that's what he's doing: for(int i=0; i<a.length;i++){ – JB Nizet, Nov 17 at 23:00

  • @Nivedita note that s.indexOf(a[i]) is... i. – JB Nizet, Nov 17 at 23:02

  • @NathanHughes For input = "cc", if I do not use indexOf, the code will return 1, while the expected answer is -1 since the string has no unique characters. – Nivedita, Nov 17 at 23:08

  • O() is a measure of how well an algorithm scales, not how fast it is. – ikegami, Nov 18 at 20:05

7 Answers

















86 votes, answered Nov 17 at 23:02 by arshajii













Consider:




  • f1(n) = n^2

  • f2(n) = n + 1000

Clearly f1 is O(n^2) and f2 is O(n). For a small input (say, n = 5), we have f1(n) = 25 but f2(n) > 1000.

Just because one function (or time complexity) is O(n) and another is O(n^2) doesn't mean that the former is smaller for all values of n, just that there is some n beyond which this will be the case.
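To make the crossover concrete, here is a minimal sketch (my own illustration, not part of this answer; the class and variable names are made up) that finds the point where the example cost function f1 overtakes f2:

public class Crossover {
    public static void main(String[] args) {
        for (int n = 1; n <= 100; n++) {
            long f1 = (long) n * n;   // the O(n^2) cost model
            long f2 = n + 1000L;      // the O(n) cost model with a large constant
            if (f1 > f2) {
                // prints "f1 overtakes f2 at n = 33"
                System.out.println("f1 overtakes f2 at n = " + n);
                break;
            }
        }
    }
}

Below that point the quadratic function is the cheaper of the two; with different constants the crossover can move to much larger n.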






  • Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, solution 2 would still perform better (according to my answer). In the worst case, for n (string size) as small as 30000 (leetcode tests with much larger n!), the second approach is already about 50-100 times worse. – user202729, Nov 19 at 5:35




















37 votes, answered Nov 17 at 22:52 by Karol Dowbecki













For very short strings (e.g. a single character), the cost of creating the HashMap, resizing it, and looking up entries while boxing and unboxing char into Character might overshadow the cost of String.indexOf(), which is probably considered hot and inlined by the JVM either way.

Another reason might be the cost of RAM access. With the additional HashMap, Character and Integer objects involved in a lookup, there may be a need for extra accesses to and from RAM. A single access is ~100 ns, and this can add up.

Take a look at Bjarne Stroustrup: Why you should avoid Linked Lists. This lecture illustrates that performance is not the same as complexity, and that memory access can be a killer for an algorithm.
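To illustrate the overhead described above, here is a sketch of mine (not from this answer) of the same counting idea using a plain int array indexed by the character value; it avoids the HashMap, the boxing to Character/Integer, and the per-entry allocations. The array size of 128 assumes ASCII-only input (the LeetCode version of the problem promises lowercase letters).

// Same strategy as Approach 1, but counting into a primitive array instead of a
// HashMap<Character, Integer>: no boxing, no entry objects, no map resizing.
// Assumes the input is ASCII, hence the array size of 128.
public int firstUniqChar(String s) {
    int[] counts = new int[128];
    for (int i = 0; i < s.length(); i++) {
        counts[s.charAt(i)]++;              // one array write per character
    }
    for (int i = 0; i < s.length(); i++) {
        if (counts[s.charAt(i)] == 1) {     // first character that occurs exactly once
            return i;
        }
    }
    return -1;
}

This keeps the O(n) shape of Approach 1 while replacing its per-character map operations with plain array accesses.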






  • So, would it be correct to say that theoretically approach 1 is faster, but practically approach 2 has better performance? – Nivedita, Nov 17 at 23:03

  • @Nivedita for some cases, yes. The theoretical approach is not really faster; it just has a better complexity, which is a different measure. – Karol Dowbecki, Nov 17 at 23:04

  • @Nivedita - What you said is not true in general. – Stephen C, Nov 17 at 23:16

  • Of course, if you're doing a Cracking the Coding Interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality. – marko, Nov 17 at 23:31

  • @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory; approach 1 won't cause your application to hang for larger values of n than approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path. – Jared Smith, Nov 18 at 3:02




















16 votes, answered Nov 17 at 23:02 by marko













Big O notation is a theoretical measure of the manner in which an algorithm scales in terms of memory consumption or computational time with N, the number of elements or dominant operations, and always as N -> Infinity.

In practice, N in your example is fairly small. Whilst adding an element to a hash table is generally considered to be amortised O(1), it might also result in a memory allocation (again, depending on the design of your hash table). This may not be O(1), and may also result in the process making a system call to the kernel for another page.

Taking the O(n^2) solution: the string in a will quickly find itself in cache and will likely run uninterrupted. The cost of a single memory allocation will likely be higher than the pair of nested loops.

In practice, with modern CPU architectures, where reads from cache are orders of magnitude faster than those from main memory, N will be quite large before a theoretically optimal algorithm outperforms a linear data structure and linear searching. Binary trees are particularly bad news for cache efficiency.

[Edit] It's Java: the hash table contains references to boxed java.lang.Character objects. Every single addition will result in a memory allocation.






  • "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
    – Jules
    Nov 19 at 3:31












  • Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
    – Holger
    Nov 19 at 10:15




















11 votes













O(n^2) is only the worst-case time complexity of the second approach.

For strings such as bbbbbb...bbbbbbbbbaaaaaaaaaaa...aaaaaaaaaaa, where there are x b's and x a's, each loop iteration takes about x steps to determine the index, hence the total number of steps performed is about 2x^2. For x around 30000, that would take about 1-2 seconds, while the other solution would perform much better.

On Try it online, this benchmark shows that approach 2 is about 50 times slower than approach 1 for the string above. For larger x, the difference is even larger (approach 1 takes about 0.01 seconds, approach 2 takes a few seconds).



However:



For strings with each character chosen independently, uniformly from {a,b,c,...,z} [1], the expected time complexity should be O(n).



This is true assuming Java uses the naive string searching algorithm, which scans the string one character at a time until a match is found and then returns immediately. The time complexity of the search is the number of characters considered.



It can be easily proven (the proof is similar to this Math.SE post - Expected value of the number of flips until the first head) that the expected position of a particular character in a uniform independent string over the alphabet {a,b,c,...,z} is O(1). Therefore, each indexOf and lastIndexOf call runs in expected O(1) time, and the whole algorithm takes expected O(n) time.



[1]: In the original leetcode challenge, it is said that




You may assume the string contain only lowercase letters.




However, that is not mentioned in the question.
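To reproduce the worst case described above, here is a self-contained sketch of my own (not the answer's Try it online benchmark): it builds the adversarial string of x b's followed by x a's and times the indexOf/lastIndexOf approach on it. The class and method names are made up, and absolute timings will vary with the machine and JIT warm-up.

// x 'b's followed by x 'a's: no character is unique, so the loop never breaks
// early, and one of the two scans in each iteration walks about x characters,
// giving roughly 2*x^2 work in total.
public class WorstCaseDemo {
    static int firstUniqCharIndexOf(String s) {          // Approach 2 from the question
        char[] a = s.toCharArray();
        for (int i = 0; i < a.length; i++) {
            if (s.indexOf(a[i]) == s.lastIndexOf(a[i])) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int x = 30_000;
        StringBuilder sb = new StringBuilder(2 * x);
        for (int i = 0; i < x; i++) sb.append('b');
        for (int i = 0; i < x; i++) sb.append('a');
        String s = sb.toString();

        long start = System.nanoTime();
        int res = firstUniqCharIndexOf(s);               // expected result: -1
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("result = " + res + ", took " + elapsedMs + " ms");
    }
}

Timing the hash-map version of firstUniqChar on the same string should show a gap of roughly the order reported above.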






  • The problem with this analysis as far as I can see is that you limit the alphabet size to 26 characters. If we don't do this, I would assume that the length of the alphabet in comparison to the length of the string also influences the complexity. – Voo, Nov 18 at 11:56

  • @Voo In the leetcode example it's true (although, unfortunately, it is not mentioned in the question). – user202729, Nov 18 at 11:57

  • For all practical purposes the alphabet is probably going to be <100, and if the question even specifies it, even better. I just overlooked your set specification on first glance and was really confused how the statement could hold for arbitrarily large character sets (although char is limited to 65k anyhow), and played around with it a bit. – Voo, Nov 18 at 12:00


















3 votes













Karol already provided a good explanation for your special case. I want to add a general remark regarding the big O notation for time complexity.



In general, this time complexity doesn't tell you much about the actual performance. It just gives you an idea of the number of iterations needed by a particular algorithm.

Let me put it like this: executing a huge number of fast iterations can still be faster than executing very few extremely slow iterations.







2 votes













First, complexity analysis does not tell you an awful lot. It used to tell you how algorithms would -- in theory -- compare as the size of the problem grows to large numbers (towards infinity, if you will), and to some extent it still does.

However, complexity analysis makes assumptions that were only half-true some 30-40 years ago and are not in any way true nowadays (such as, e.g., all ops are the same, all accesses are the same). We live in a world in which constant factors are huge, and not all operations are the same, not even remotely. As such, it has to be considered with very great care; in no case can you assume "this is O(N) so it will be faster". That's a huge fallacy.

For small numbers, looking at "big O" is mostly pointless, but even for large numbers, be aware that the constant factor can play a huge, dominating role. No, the constant factor is not zero, and it is not negligible. Do not ever assume that.

The theoretically super awesome algorithm which, for example, finds something in a billion elements with only 20 accesses can be much slower than a "bad" algorithm that takes 200,000 accesses -- if in the first case each of the 20 accesses causes a page fault with a disk seek (each of which is worth some hundred million operations). Theory and practice do not always go hand in hand here.

Second, despite being idiomatic and generally looking like a good idea (it's O(1), eh?), using a hash map is just bad in many cases. Not in every case, but this is such a case. Compare what the two code snippets are doing.

The O(N^2) one converts a moderately small string to a character array once (which basically costs zero) and then repeatedly accesses that array in a linear fashion. Which is pretty much the fastest thing a computer is able to do, even in Java. Yes, Java is agnostic of any such thing as memory or caches, but that cannot change the fact that these things exist. Locally accessing small/moderate amounts of data in a mostly linear fashion is fast.

The other snippet inserts characters into a hashmap, allocating a data structure for every character. Yes, dynamic allocations in Java are not that expensive, but still, allocations are nowhere near free, and memory accesses become non-contiguous.

Then, a hash function is calculated. This is something that is often overlooked with hash maps. For a single character, this is (hopefully) a cheap operation, but it is nowhere near free[1]. Then, the data structure is in some way inserted into a bucket (which is technically nothing but another non-coherent memory access). Now, there's a fair chance of a collision, in which case something else must be done (chaining, rehashing, whatever).

Later, values are read from the hashmap again, which again involves calling the hash function, looking up the bucket, possibly traversing a list, and doing a comparison at each node (this is necessary due to the possibility of collisions).

Every operation thus involves at least two indirections, plus some calculations. That, all in all, is painfully expensive compared to just iterating over a small array a couple of times.

[1] Not an issue here for single-character keys but still, fun fact: people often talk of hash maps in terms of O(1), which already isn't true with e.g. chaining, but are then surprised that actually hashing the key is O(N) with respect to the length of the key. Which may very well be noticeable.
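To make the footnote concrete: the hash documented for java.lang.String walks every character of the key. The sketch below is mine, for illustration only; it re-implements the documented formula s[0]*31^(n-1) + ... + s[n-1] just to make the linear pass visible (real code should simply call hashCode()).

// Hashing an m-character key is O(m) work, before any bucket lookup
// or collision handling even starts.
static int stringHash(String s) {
    int h = 0;
    for (int i = 0; i < s.length(); i++) {
        h = 31 * h + s.charAt(i);   // one multiply-add per character
    }
    return h;
}

For the single-character keys in this question that cost is negligible, as the footnote says, but for long string keys it can easily show up in profiles.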




0 votes













I've ported the functions to C++17 to see if the difference was caused by algorithm complexity or by Java.



#include <map>
#include <string_view>

// Port of Approach 1: count occurrences in a map, then find the first count of 1.
int first_unique_char(const char* s, int s_len) noexcept {
    std::map<char, int> char_hash;
    int res = -1;
    for (int i = 0; i < s_len; i++) {
        char c = s[i];
        auto r = char_hash.find(c);
        if (r == char_hash.end())
            char_hash.insert(std::pair<char, int>(c, 1));
        else {
            int new_val = r->second + 1;
            char_hash.erase(c);
            char_hash.insert(std::pair<char, int>(c, new_val));
        }
    }
    for (int i = 0; i < s_len; i++)
        if (char_hash.find(s[i])->second == 1) {
            res = i;
            break;
        }
    return res;
}

// Port of Approach 2: compare the first and last occurrence of each character.
int first_unique_char2(const char* s, int s_len) noexcept {
    int res = -1;
    std::string_view str = std::string_view(s, s_len);
    for (int i = 0; i < s_len; i++) {
        char c = s[i];
        if (str.find_first_of(c) == str.find_last_of(c)) {
            res = i;
            break;
        }
    }
    return res;
}


The result was:

The second one is ~30% faster for leetcode.




Later, I noticed that

    if (r == char_hash.end())
        char_hash.insert(std::pair<char, int>(c, 1));
    else {
        int new_val = r->second + 1;
        char_hash.erase(c);
        char_hash.insert(std::pair<char, int>(c, new_val));
    }

could be optimized to

    char_hash.try_emplace(c, 1);


This also confirms that complexity is not the only thing. There's "input length", which other answers have covered, and lastly I noticed that implementation also makes a difference: longer code hides optimization opportunities.







  • This isn't the same thing, as the equivalent of HashMap would rather be std::unordered_map; std::map would be a TreeMap AFAIK. – Moira, Nov 18 at 14:12

  • ^, std::map performance is poor compared to umap for these things. – Sombrero Chicken, Nov 18 at 17:07


















      7 Answers
      7






      active

      oldest

      votes








      7 Answers
      7






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes








      up vote
      86
      down vote













      Consider:




      • f1(n) = n2

      • f2(n) = n + 1000


      Clearly f1 is O(n2) and f2 is O(n). For a small input (say, n=5), we have f1(n) = 25 but f2(n) > 1000.



      Just because one function (or time complexity) is O(n) and another is O(n2) doesn't mean that the former is smaller for all values of n, just that there is some n beyond which this will be the case.






      share|improve this answer





















      • Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, (according to my answer) solution 2 would still perform better. In the worst case for n (string size) as small as 30000 (leetcode tests with much larger n!) the second approach is already about 50-100 times worse.
        – user202729
        Nov 19 at 5:35

















      up vote
      86
      down vote













      Consider:




      • f1(n) = n2

      • f2(n) = n + 1000


      Clearly f1 is O(n2) and f2 is O(n). For a small input (say, n=5), we have f1(n) = 25 but f2(n) > 1000.



      Just because one function (or time complexity) is O(n) and another is O(n2) doesn't mean that the former is smaller for all values of n, just that there is some n beyond which this will be the case.






      share|improve this answer





















      • Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, (according to my answer) solution 2 would still perform better. In the worst case for n (string size) as small as 30000 (leetcode tests with much larger n!) the second approach is already about 50-100 times worse.
        – user202729
        Nov 19 at 5:35















      up vote
      86
      down vote










      up vote
      86
      down vote









      Consider:




      • f1(n) = n2

      • f2(n) = n + 1000


      Clearly f1 is O(n2) and f2 is O(n). For a small input (say, n=5), we have f1(n) = 25 but f2(n) > 1000.



      Just because one function (or time complexity) is O(n) and another is O(n2) doesn't mean that the former is smaller for all values of n, just that there is some n beyond which this will be the case.






      share|improve this answer












      Consider:




      • f1(n) = n2

      • f2(n) = n + 1000


      Clearly f1 is O(n2) and f2 is O(n). For a small input (say, n=5), we have f1(n) = 25 but f2(n) > 1000.



      Just because one function (or time complexity) is O(n) and another is O(n2) doesn't mean that the former is smaller for all values of n, just that there is some n beyond which this will be the case.







      share|improve this answer












      share|improve this answer



      share|improve this answer










      answered Nov 17 at 23:02









      arshajii

      99.6k18178249




      99.6k18178249












      • Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, (according to my answer) solution 2 would still perform better. In the worst case for n (string size) as small as 30000 (leetcode tests with much larger n!) the second approach is already about 50-100 times worse.
        – user202729
        Nov 19 at 5:35




















      • Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, (according to my answer) solution 2 would still perform better. In the worst case for n (string size) as small as 30000 (leetcode tests with much larger n!) the second approach is already about 50-100 times worse.
        – user202729
        Nov 19 at 5:35


















      Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, (according to my answer) solution 2 would still perform better. In the worst case for n (string size) as small as 30000 (leetcode tests with much larger n!) the second approach is already about 50-100 times worse.
      – user202729
      Nov 19 at 5:35






      Arguing that n is too small can be misleading. Even with larger n, if the strings are still random, (according to my answer) solution 2 would still perform better. In the worst case for n (string size) as small as 30000 (leetcode tests with much larger n!) the second approach is already about 50-100 times worse.
      – user202729
      Nov 19 at 5:35














      up vote
      37
      down vote













      For very short strings e.g. single character the cost of creating HashMap, re-sizing it, looking up entries while boxing and unboxing of char into Character might overshadow the cost of String.indexOf(), which is probably considered hot and inlined by JVM either way.



      Another reason might be the cost of RAM access. With additional HashMap, Character and Integer objects involved in a lookup there might be need for additional access to and from RAM. Single access is ~100 ns and this can add up.



      Take a look at Bjarne Stroustrup: Why you should avoid Linked Lists. This lecture illustrates that performance is not the same as complexity and memory access can be a killer for an algorithm.






      share|improve this answer























      • So, will it be correct to say that - theoretically approach 1 is faster but practically approach 2 has a better performance?
        – Nivedita
        Nov 17 at 23:03










      • @Nivedita for some cases, yes. Theoretical approach is not really faster, it just has a better complexity which is a different measure.
        – Karol Dowbecki
        Nov 17 at 23:04








      • 2




        @Nivedita - What you said is not true in general.
        – Stephen C
        Nov 17 at 23:16






      • 2




        Of course, if you're doing a Cracking the coding interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality.
        – marko
        Nov 17 at 23:31






      • 6




        @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory, approach 1 won't cause your application to hang for larger values of n then approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path.
        – Jared Smith
        Nov 18 at 3:02

















      up vote
      37
      down vote













      For very short strings e.g. single character the cost of creating HashMap, re-sizing it, looking up entries while boxing and unboxing of char into Character might overshadow the cost of String.indexOf(), which is probably considered hot and inlined by JVM either way.



      Another reason might be the cost of RAM access. With additional HashMap, Character and Integer objects involved in a lookup there might be need for additional access to and from RAM. Single access is ~100 ns and this can add up.



      Take a look at Bjarne Stroustrup: Why you should avoid Linked Lists. This lecture illustrates that performance is not the same as complexity and memory access can be a killer for an algorithm.






      share|improve this answer























      • So, will it be correct to say that - theoretically approach 1 is faster but practically approach 2 has a better performance?
        – Nivedita
        Nov 17 at 23:03










      • @Nivedita for some cases, yes. Theoretical approach is not really faster, it just has a better complexity which is a different measure.
        – Karol Dowbecki
        Nov 17 at 23:04








      • 2




        @Nivedita - What you said is not true in general.
        – Stephen C
        Nov 17 at 23:16






      • 2




        Of course, if you're doing a Cracking the coding interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality.
        – marko
        Nov 17 at 23:31






      • 6




        @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory, approach 1 won't cause your application to hang for larger values of n then approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path.
        – Jared Smith
        Nov 18 at 3:02















      up vote
      37
      down vote










      up vote
      37
      down vote









      For very short strings e.g. single character the cost of creating HashMap, re-sizing it, looking up entries while boxing and unboxing of char into Character might overshadow the cost of String.indexOf(), which is probably considered hot and inlined by JVM either way.



      Another reason might be the cost of RAM access. With additional HashMap, Character and Integer objects involved in a lookup there might be need for additional access to and from RAM. Single access is ~100 ns and this can add up.



      Take a look at Bjarne Stroustrup: Why you should avoid Linked Lists. This lecture illustrates that performance is not the same as complexity and memory access can be a killer for an algorithm.






      share|improve this answer














      For very short strings e.g. single character the cost of creating HashMap, re-sizing it, looking up entries while boxing and unboxing of char into Character might overshadow the cost of String.indexOf(), which is probably considered hot and inlined by JVM either way.



      Another reason might be the cost of RAM access. With additional HashMap, Character and Integer objects involved in a lookup there might be need for additional access to and from RAM. Single access is ~100 ns and this can add up.



      Take a look at Bjarne Stroustrup: Why you should avoid Linked Lists. This lecture illustrates that performance is not the same as complexity and memory access can be a killer for an algorithm.







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Nov 19 at 4:39









      Peter Mortensen

      13.3k1983111




      13.3k1983111










      answered Nov 17 at 22:52









      Karol Dowbecki

      13.5k72745




      13.5k72745












      • So, will it be correct to say that - theoretically approach 1 is faster but practically approach 2 has a better performance?
        – Nivedita
        Nov 17 at 23:03










      • @Nivedita for some cases, yes. Theoretical approach is not really faster, it just has a better complexity which is a different measure.
        – Karol Dowbecki
        Nov 17 at 23:04








      • 2




        @Nivedita - What you said is not true in general.
        – Stephen C
        Nov 17 at 23:16






      • 2




        Of course, if you're doing a Cracking the coding interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality.
        – marko
        Nov 17 at 23:31






      • 6




        @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory, approach 1 won't cause your application to hang for larger values of n then approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path.
        – Jared Smith
        Nov 18 at 3:02




















      • So, will it be correct to say that - theoretically approach 1 is faster but practically approach 2 has a better performance?
        – Nivedita
        Nov 17 at 23:03










      • @Nivedita for some cases, yes. Theoretical approach is not really faster, it just has a better complexity which is a different measure.
        – Karol Dowbecki
        Nov 17 at 23:04








      • 2




        @Nivedita - What you said is not true in general.
        – Stephen C
        Nov 17 at 23:16






      • 2




        Of course, if you're doing a Cracking the coding interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality.
        – marko
        Nov 17 at 23:31






      • 6




        @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory, approach 1 won't cause your application to hang for larger values of n then approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path.
        – Jared Smith
        Nov 18 at 3:02


















      So, will it be correct to say that - theoretically approach 1 is faster but practically approach 2 has a better performance?
      – Nivedita
      Nov 17 at 23:03




      So, will it be correct to say that - theoretically approach 1 is faster but practically approach 2 has a better performance?
      – Nivedita
      Nov 17 at 23:03












      @Nivedita for some cases, yes. Theoretical approach is not really faster, it just has a better complexity which is a different measure.
      – Karol Dowbecki
      Nov 17 at 23:04






      @Nivedita for some cases, yes. Theoretical approach is not really faster, it just has a better complexity which is a different measure.
      – Karol Dowbecki
      Nov 17 at 23:04






      2




      2




      @Nivedita - What you said is not true in general.
      – Stephen C
      Nov 17 at 23:16




      @Nivedita - What you said is not true in general.
      – Stephen C
      Nov 17 at 23:16




      2




      2




      Of course, if you're doing a Cracking the coding interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality.
      – marko
      Nov 17 at 23:31




      Of course, if you're doing a Cracking the coding interview style question, N is always infinitely large ;). I'd be impressed with a candidate who, without prompting, told me how theory doesn't quite align with reality.
      – marko
      Nov 17 at 23:31




      6




      6




      @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory, approach 1 won't cause your application to hang for larger values of n then approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path.
      – Jared Smith
      Nov 18 at 3:02






      @Nivedita you're missing the point: O(n) is based on what happens as n gets large. Approach 1 isn't just faster in theory, approach 1 won't cause your application to hang for larger values of n then approach 2. Always, always, always think "how will this break? what can go wrong?". Never, never, never focus on the happy path.
      – Jared Smith
      Nov 18 at 3:02












      up vote
      16
      down vote













      Big O notation is a theoretical measure of the manner in which an algorithm scales in terms of memory consumption or computational time with N - the number of elements or dominant operations, and always as N->Infinity.



      In practice, N in your example is fairly small. Whilst adding an element to a hash-table is generally considered to be amortised O(1), it might also result in a memory allocation (again, depending on the the design of your hash-table). This may not be O(1) - and may also result in the process making a system call to the kernel for another page.



      Taking the O(n^2) solution - the string in a will quickly find itself in cache and will likely run uninterrupted. The cost of a single memory allocation will likely be higher than the pair of nested loops.



      In practice with modern CPU architectures, where reads form cache are orders of magnitude faster than those from main memory, N will be quite large before using a theoretically optimal algorithm outperforms a linear data structure and linear searching. Binary trees are particularly bad news for cache efficiency.



      [Edit] it's Java: The hash table contains references to boxed java.lang.Character object. Every single addition will result in a memory allocation






      share|improve this answer























      • "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
        – Jules
        Nov 19 at 3:31












      • Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
        – Holger
        Nov 19 at 10:15

















      up vote
      16
      down vote













      Big O notation is a theoretical measure of the manner in which an algorithm scales in terms of memory consumption or computational time with N - the number of elements or dominant operations, and always as N->Infinity.



      In practice, N in your example is fairly small. Whilst adding an element to a hash-table is generally considered to be amortised O(1), it might also result in a memory allocation (again, depending on the the design of your hash-table). This may not be O(1) - and may also result in the process making a system call to the kernel for another page.



      Taking the O(n^2) solution - the string in a will quickly find itself in cache and will likely run uninterrupted. The cost of a single memory allocation will likely be higher than the pair of nested loops.



      In practice with modern CPU architectures, where reads form cache are orders of magnitude faster than those from main memory, N will be quite large before using a theoretically optimal algorithm outperforms a linear data structure and linear searching. Binary trees are particularly bad news for cache efficiency.



      [Edit] it's Java: The hash table contains references to boxed java.lang.Character object. Every single addition will result in a memory allocation






      share|improve this answer























      • "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
        – Jules
        Nov 19 at 3:31












      • Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
        – Holger
        Nov 19 at 10:15















      up vote
      16
      down vote










      up vote
      16
      down vote









      Big O notation is a theoretical measure of the manner in which an algorithm scales in terms of memory consumption or computational time with N - the number of elements or dominant operations, and always as N->Infinity.



      In practice, N in your example is fairly small. Whilst adding an element to a hash-table is generally considered to be amortised O(1), it might also result in a memory allocation (again, depending on the the design of your hash-table). This may not be O(1) - and may also result in the process making a system call to the kernel for another page.



      Taking the O(n^2) solution - the string in a will quickly find itself in cache and will likely run uninterrupted. The cost of a single memory allocation will likely be higher than the pair of nested loops.



      In practice with modern CPU architectures, where reads form cache are orders of magnitude faster than those from main memory, N will be quite large before using a theoretically optimal algorithm outperforms a linear data structure and linear searching. Binary trees are particularly bad news for cache efficiency.



      [Edit] it's Java: The hash table contains references to boxed java.lang.Character object. Every single addition will result in a memory allocation






      share|improve this answer














      Big O notation is a theoretical measure of the manner in which an algorithm scales in terms of memory consumption or computational time with N - the number of elements or dominant operations, and always as N->Infinity.



      In practice, N in your example is fairly small. Whilst adding an element to a hash-table is generally considered to be amortised O(1), it might also result in a memory allocation (again, depending on the the design of your hash-table). This may not be O(1) - and may also result in the process making a system call to the kernel for another page.



      Taking the O(n^2) solution - the string in a will quickly find itself in cache and will likely run uninterrupted. The cost of a single memory allocation will likely be higher than the pair of nested loops.



      In practice with modern CPU architectures, where reads form cache are orders of magnitude faster than those from main memory, N will be quite large before using a theoretically optimal algorithm outperforms a linear data structure and linear searching. Binary trees are particularly bad news for cache efficiency.



      [Edit] it's Java: The hash table contains references to boxed java.lang.Character object. Every single addition will result in a memory allocation







      share|improve this answer














      share|improve this answer



      share|improve this answer








      edited Nov 18 at 0:02

























      answered Nov 17 at 23:02









      marko

      7,67142338




      7,67142338












      • "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
        – Jules
        Nov 19 at 3:31












      • Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
        – Holger
        Nov 19 at 10:15




















      • "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
        – Jules
        Nov 19 at 3:31












      • Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
        – Holger
        Nov 19 at 10:15


















      "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
      – Jules
      Nov 19 at 3:31






      "Every single addition will result in a memory allocation" -- yes, but note that Java virtual machines are usually highly optimized around the fact that most Java programs perform a lot of memory allocations. Java uses extremely fast allocation techniques with a trade-off of needing more complex reclamation of memory along with the hope that the latter won't be needed if the allocation pattern is simple enough or if the program is short-lived. The problem here is small and self-contained, so the GC should be able to deal with it quickly, too. Cache locality is almost certainly the key here.
      – Jules
      Nov 19 at 3:31














      Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
      – Holger
      Nov 19 at 10:15






      Boxing to Character does not necessarily lead to an allocation, as all values in the ASCII range are cached. But the HashMap itself creates an entry instance for every association.
      – Holger
      Nov 19 at 10:15












      up vote
      11
      down vote













      O(n^2) is only the worst-case time complexity of the second approach.



      For strings such as bbbb...bbbaaa...aaa, where there are x b's followed by x a's, each loop iteration takes about x steps to determine the index, so the total number of steps performed is about 2x^2. For x around 30000, that takes roughly 1 to 2 seconds, while the other solution performs much better.



      On Try it online, this benchmark shows that approach 2 is about 50 times slower than approach 1 for the string above. For larger x the difference is even larger (approach 1 takes about 0.01 seconds, approach 2 takes a few seconds).
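
      To reproduce that worst case locally, here is a rough, self-contained sketch (my own; class and method names are made up) that builds the adversarial string and times approach 2 on it; plugging in approach 1 instead shows the gap described above:

      public class WorstCaseDemo {

          // Approach 2 from the question: one indexOf/lastIndexOf pair per character.
          static int firstUniqCharIndexOf(String s) {
              char[] a = s.toCharArray();
              for (int i = 0; i < a.length; i++) {
                  if (s.indexOf(a[i]) == s.lastIndexOf(a[i])) {
                      return i;
                  }
              }
              return -1;
          }

          public static void main(String[] args) {
              int x = 30_000;
              StringBuilder sb = new StringBuilder(2 * x);
              for (int i = 0; i < x; i++) sb.append('b');
              for (int i = 0; i < x; i++) sb.append('a');
              String worst = sb.toString();                // "bb...baa...a": no unique character exists

              long start = System.nanoTime();
              int res = firstUniqCharIndexOf(worst);       // every indexOf/lastIndexOf scans deep into the string
              long ms = (System.nanoTime() - start) / 1_000_000;
              System.out.println("result = " + res + ", took " + ms + " ms");
          }
      }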



      However:



      For strings with each character chosen independently, uniformly from {a,b,c,...,z} [1], the expected time complexity should be O(n).



      This is true assuming Java uses the naive string searching algorithm, which searches the character one-by-one until a match is found, then immediately returns. The time complexity of the search is the number of considered characters.



      It can be easily proven (the proof is similar to this Math.SE post - Expected value of the number of flips until the first head) that the expected position of a particular character in a uniform independent string over the alphabet {a,b,c,...,z} is O(1). Therefore, each indexOf and lastIndexOf call runs in expected O(1) time, and the whole algorithm takes expected O(n) time.
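
      As a quick sanity check of that expectation, here is a small Monte Carlo sketch of mine (not part of the original proof): for uniform random lowercase strings, the average zero-based index returned by indexOf('a') stays around 25 no matter how large n is, which is exactly why each call is expected O(1):

      import java.util.Random;

      public class ExpectedIndexDemo {
          public static void main(String[] args) {
              Random rnd = new Random(42);
              int n = 1_000_000, trials = 200;
              long sum = 0;
              for (int t = 0; t < trials; t++) {
                  char[] a = new char[n];
                  for (int i = 0; i < n; i++) {
                      a[i] = (char) ('a' + rnd.nextInt(26));   // uniform over a..z
                  }
                  sum += new String(a).indexOf('a');           // index of the first 'a'; -1 is astronomically unlikely at this n
              }
              System.out.println("average first index of 'a': " + (double) sum / trials);
          }
      }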



      [1]: In the original leetcode challenge, it is said that




      You may assume the string contain only lowercase letters.




      However, that is not mentioned in the question.






      share|improve this answer























      • The problem with this analysis as far as I can see is that you limit the alphabet size to 26 characters. If we don't do this, I would assume that the length of the alphabet in comparison to the length of the string also influences the complexity.
        – Voo
        Nov 18 at 11:56










      • @voo In the leetcode example it's true. (although, unfortunately, it is not mentioned in the question)
        – user202729
        Nov 18 at 11:57










      • For all practical purposes the alphabet is almost certainly going to be <100 characters, and if the question even specifies it, even better. I just overlooked your set specification at first glance, was really confused about how the statement could hold for arbitrarily large character sets (although char is limited to 65k anyhow), and played around with it a bit.
        – Voo
        Nov 18 at 12:00















      edited Nov 18 at 14:00

























      answered Nov 18 at 10:36









      user202729

      up vote
      3
      down vote













      Karol already provided a good explanation for your special case. I want to add a general remark regarding the big O notation for time complexity.



      In general, this time complexity doesn't tell you much about the actual performance. It just gives you an idea of how the number of iterations grows with the size of the input for a particular algorithm.



      Let me put it like this: executing a huge number of fast iterations can still be faster than executing very few extremely slow iterations.






      share|improve this answer

























          answered Nov 17 at 23:07









          yaccob























              up vote
              2
              down vote













              First, complexity analysis does not tell you an awful lot. It used to tell you how algorithms would -- in theory -- compare as the size of the problem grows to large numbers (towards infinity, if you will), and to some extent it still does.

              However, complexity analysis makes assumptions that were only half-true some 30-40 years ago and are not in any way true nowadays (such as all operations being equal, or all memory accesses being equal). We live in a world in which constant factors are huge and not all operations are the same, not even remotely. As such, complexity has to be considered with great care; in no case can you assume "this is O(N), so it will be faster". That's a huge fallacy.



              For small numbers, looking at "big O" is mostly pointless, but even for large numbers, be aware that the constant factor can play a huge, dominating role. No, the constant factor is not zero, and it is not negligible. Do not ever assume that.

              The theoretically super awesome algorithm which, for example, finds something in a billion elements with only 20 accesses can be much slower than a "bad" algorithm that takes 200,000 accesses -- if in the first case each of the 20 accesses causes a page fault with a disk seek (each of which is worth some hundred million operations). Theory and practice do not always go hand in hand here.



              Second, despite being idiomatic and generally looking like a good idea (it's O(1), eh?), using a hash map is just bad in many cases. Not in every case, but this is such a case. Compare what the two code snippets are doing.



              The O(N^2) one converts a moderately small string to a character array once (which basically costs zero) and then repeatedly accesses that array in a linear fashion. Which is pretty much the fastest thing a computer is able to do, even in Java. Yes, Java is agnostic of any such thing as memory or caches, but that cannot change the fact that these things exist. Locally accessing small/moderate amounts of data in a mostly linear fashion is fast.



              The other snippet inserts characters into a hashmap, allocating a data structure for every character. Yes, dynamic allocations in Java are not that expensive, but still, allocations are nowhere near free, and memory accesses become non-contiguous.

              Then, a hash function is calculated. This is something that is often overlooked with hash maps. For a single character, this is (hopefully) a cheap operation, but it is nowhere near free[1]. Then, the data structure is in some way inserted into a bucket (which is technically nothing but another non-coherent memory access). Now, there's a fair chance of a collision, in which case something else must be done (chaining, rehashing, whatever).

              Later, values are being read from the hashmap again, which again involves calling the hash function, looking up the bucket, possibly traversing a list, and doing a comparison at each node (this is necessary due to the possibility of collisions).



              Every operation thus involves at least two indirections, plus some calculations. That, all in all, is painfully expensive compared to just iterating over a small array a couple of times.






              [1] Not an issue here for single-character keys but still, fun fact: People often talk of hash maps in terms of O(1) which already isn't true with e.g. chaining, but are then surprised that actually hashing the key is O(N) in respect of the length of the key. Which may very well be noticeable.
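
              For completeness, the usual way to keep the counting idea but avoid all of the hash-map machinery described above is a plain int array indexed by the character, so every access stays inside one small contiguous block of memory. A minimal sketch of mine, assuming the lowercase-only constraint from the challenge:

              class Solution {
                  public int firstUniqChar(String s) {
                      int[] counts = new int[26];                  // one slot per lowercase letter: no boxing, no per-entry allocation
                      for (int i = 0; i < s.length(); i++) {
                          counts[s.charAt(i) - 'a']++;
                      }
                      for (int i = 0; i < s.length(); i++) {
                          if (counts[s.charAt(i) - 'a'] == 1) {
                              return i;                            // first character that occurs exactly once
                          }
                      }
                      return -1;
                  }
              }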




              share|improve this answer



























                  edited Nov 20 at 12:55

























                  answered Nov 19 at 9:46









                  Damon























                      up vote
                      0
                      down vote













                      I've ported the functions to C++17 to see whether the difference was caused by the algorithms' complexity or by Java.



                      #include <map>
                      #include <string_view>

                      // Port of approach 1 (note: std::map is an ordered tree map, not a hash map; see the comments below)
                      int first_unique_char(const char *s, int s_len) noexcept {
                          std::map<char, int> char_hash;
                          int res = -1;
                          for (int i = 0; i < s_len; i++) {
                              char c = s[i];
                              auto r = char_hash.find(c);
                              if (r == char_hash.end())
                                  char_hash.insert(std::pair<char, int>(c, 1));
                              else {
                                  int new_val = r->second + 1;
                                  char_hash.erase(c);
                                  char_hash.insert(std::pair<char, int>(c, new_val));
                              }
                          }
                          for (int i = 0; i < s_len; i++)
                              if (char_hash.find(s[i])->second == 1) {
                                  res = i;
                                  break;
                              }
                          return res;
                      }

                      // Port of approach 2: compare first and last occurrence of each character
                      int first_unique_char2(const char *s, int s_len) noexcept {
                          int res = -1;
                          std::string_view str = std::string_view(s, s_len);
                          for (int i = 0; i < s_len; i++) {
                              char c = s[i];
                              if (str.find_first_of(c) == str.find_last_of(c)) {
                                  res = i;
                                  break;
                              }
                          }
                          return res;
                      }


                      The result was:




                      The second one is ~30% faster for leetcode.




                      Later, I noticed that



                          if (r == char_hash.end())
                              char_hash.insert(std::pair<char, int>(c, 1));
                          else {
                              int new_val = r->second + 1;
                              char_hash.erase(c);
                              char_hash.insert(std::pair<char, int>(c, new_val));
                          }


                      could be simplified to

                          ++char_hash[c];   // operator[] value-initializes a missing count to 0 before the increment

                      (try_emplace(c, 1) alone would only cover the insert case and drop the increment for repeated characters).


                      This also confirms that complexity is not the only thing that matters. There's the input length, which other answers have covered, and lastly I noticed that




                      Implementation also makes a difference. Longer code hides optimization opportunities.







                      share|improve this answer



















                      • This isn't the same thing, as the equivalent of HashMap would be std::unordered_map; std::map would be a TreeMap AFAIK.
                        – Moira
                        Nov 18 at 14:12












                      • ^, std::map performance is poor compared to umap for these things.
                        – Sombrero Chicken
                        Nov 18 at 17:07















                      edited Nov 18 at 14:47

























                      answered Nov 18 at 13:40









                      MCCCS








