Find every possible pair of numbers that sums to a given number
I have the following code, which finds every possible pair of numbers that sums to a given number:
lst = [int(input()) for i in range(int(input()))]
num = int(input())  # Amount to be matched
cnt = 0
lst = list(filter(lambda i: i <= num, lst))  # Remove numbers that are more than `num`
for i in lst:
    for j in lst:
        if i + j == num:
            cnt += 1
print(int(cnt / 2))
For example, if I enter
5 #How many numbers
1 < #Start of number input
4 <
5
7
1 < #End of number input
5 #Amount to be matched
It will return 2, because there are two pairs whose sum equals 5: (1, 4) and (4, 1) (the numbers I marked with <).
The problem is that the complexity is O(n²), which runs slowly on large input. Is there a way to make this run faster?
Another example:
10 #How many numbers
46 #Start of number input
35
27
45
16
0 <
30 <
30 <
45
37 #End of number input
30 #Amount to be matched
The pairs will be (0, 30) and (0, 30), so it will return 2.
python algorithm k-sum
Comments:
- If you store them in a set you can, for each number entered, find if amount - number is in the set. If you ask the user to enter amount BEFORE entering all the numbers, you can do it in real time, so that by the moment the user enters the last number you'll already know all the appropriate pairs. – bipll, Nov 21 at 13:37
- @bipll Will try, thanks! – phwt, Nov 21 at 13:48
- @phwt - that's true only if no numbers appear more than once. There's a bit more work involved in the general case - see my answer. – Toby Speight, Nov 21 at 16:28
- Are the numbers in the list all non-negative integers? Do you know anything about the maximum value of those integers? – Eric Lippert, Nov 21 at 20:01
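bipll's streaming idea, combined with Toby Speight's caveat about duplicates, might be sketched like this (my illustration, not code from the thread; the function name is invented). A Counter of values seen so far replaces the plain set, so repeated numbers are counted correctly:

```python
from collections import Counter

def count_pairs_online(amount, stream):
    # amount is known before the numbers arrive (bipll's precondition);
    # a Counter rather than a plain set handles repeated numbers,
    # which is the caveat Toby Speight raises.
    seen = Counter()
    pairs = 0
    for n in stream:
        pairs += seen[amount - n]  # every earlier complement pairs with n
        seen[n] += 1
    return pairs
```

By the time the last number is processed, the running total already counts every pair, so no second pass is needed.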
asked Nov 21 at 13:05 by phwt, edited Nov 22 at 4:09 by 200_success
4 Answers

Accepted answer (score 10) – Toby Speight, answered Nov 21 at 13:25, edited Nov 21 at 22:26
Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.
You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted and we then use a binary search to test membership, we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter, which has amortized O(1) insertion and lookup), then we come down to O(n).
If we have a Counter, then we can account for all combinations of a pair by multiplying one count by the other (but we'll need to consider the special case where the number is exactly half the target).
We could do with some auto tests. Consider importing the doctest module and using it. Some good test cases to include:
1, [] → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
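One possible implementation of the Counter approach described above (my sketch, not the answerer's code; the function name count_pairs is invented). It passes the listed test cases:

```python
from collections import Counter

def count_pairs(target, numbers):
    counts = Counter(numbers)
    pairs = 0
    for value, count in counts.items():
        complement = target - value
        if complement == value:
            # Special case: the number is exactly half the target,
            # so choose any 2 of its `count` occurrences.
            pairs += count * (count - 1) // 2
        elif complement > value and complement in counts:
            # Multiply one count by the other; `>` counts each pair once.
            pairs += count * counts[complement]
    return pairs
```

Wrapped in doctests as the answer suggests, each listed case becomes one `>>>` line in the docstring.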
Answer (score 6) – Eric Lippert, answered Nov 21 at 20:14, edited Nov 21 at 20:56
So far every solution given has been O(n²) or O(n log n), but there is an O(n) solution, which is sketched as follows:
- Get your input into a list, as you have done. Obviously this is O(n).
- Create an empty map from integers to integers. The map must have an O(1) insertion operation, an O(1) contains-key operation, an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.
- Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).
- Now we take our target, call it sum, and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key is k. We compute sum - k. Now there are two cases.
  - Case 1: if sum - k == k, then check the map to see if the value associated with k is 2 or greater. If it is, then we have a pair (k, k). Output it.
  - Case 2: if sum - k is not k, then check the map to see if sum - k is in the map. If it is, then we have a pair (k, sum - k).
- The enumeration visits at most n keys, and each step is O(1), so this step is also O(n).
- And we're done, with total cost O(n).
Now, can you implement this solution?
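One way the sketch might be fleshed out (my attempt, not the answerer's code; I report each pair once by requiring k < sum - k, a detail the sketch leaves open):

```python
def pairs_summing_to(total, numbers):
    # Build a count of all items; a plain dict serves as the map.
    counts = {}
    for item in numbers:
        counts[item] = counts.get(item, 0) + 1
    pairs = []
    for k in counts:  # enumerate the keys, at most n of them
        other = total - k
        if other == k:
            # Case 1: need at least two copies of k for a (k, k) pair.
            if counts[k] >= 2:
                pairs.append((k, k))
        elif other > k and other in counts:
            # Case 2: `other > k` ensures each pair is reported once.
            pairs.append((k, other))
    return pairs
```

Each of the three loops is O(n), so the whole function is O(n) given O(1) map operations.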
Comments:
- I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n). – ChatterOne, Nov 21 at 20:48
- @ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs. – Eric Lippert, Nov 21 at 20:54
- @ChatterOne: I note that the sorted-key dictionary types and immutable dictionary types are typically balanced binary trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries. – Eric Lippert, Nov 21 at 20:58
- @EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks. – Brian, Nov 21 at 21:26
- @Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure. – Eric Lippert, Nov 21 at 21:29
Answer (score 2)
Other minor suggestions:
- Don't leave your input()s blank. Pass a prompt so that the user knows what they're entering.
- The first time you initialize lst, it doesn't need to be held in memory; it can be left as a generator (parens instead of brackets).
- The second time you initialize lst, it does not need to be mutable, so make it a tuple instead of a list.
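The suggestions above could be combined into something like this sketch (my code, not the answerer's; the readline parameter is an invented hook so the example can run without a console):

```python
def read_numbers(readline=input):
    # Pass prompts so the user knows what they're entering.
    count = int(readline("How many numbers? "))
    # Generator expression (parens, not brackets): numbers are read
    # lazily, never stored as a list.
    return (int(readline("Number: ")) for _ in range(count))

# Example with a canned input source instead of the console:
fake = iter(["3", "1", "4", "5"])
numbers = read_numbers(lambda prompt: next(fake))
kept = tuple(numbers)  # immutable tuple, per the last suggestion
```

One caveat: the generator is lazy, so nothing is read until it is consumed; the "Amount to be matched" prompt should therefore only be issued after the generator has been drained.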
Answer (score 2)
You can use a sorting algorithm to first sort the list; this can be done (in many ways) with a complexity of O(n log n).
Once the list is sorted (small to large, for example), the problem can be solved with complexity O(n) as follows:

head = 0
tail = len(lst) - 1
while head < tail:
    pair_sum = lst[head] + lst[tail]  # renamed from `sum`/`list` to avoid shadowing built-ins
    if pair_sum == num:
        cnt += 1
        head += 1
        tail -= 1
    elif pair_sum > num:
        tail -= 1
    else:
        head += 1

This results in an overall complexity of O(n log n).
Note: this runs under the assumption that all elements are unique. This can be easily fixed depending on how you want to handle cases with duplicate elements.
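The duplicate case the note mentions could be handled by counting runs of equal values at both ends (a hypothetical extension of the sketch, same two-pointer scheme; count_pairs_sorted is an invented name):

```python
def count_pairs_sorted(num, values):
    lst = sorted(values)  # O(n log n)
    head, tail, cnt = 0, len(lst) - 1, 0
    while head < tail:
        pair_sum = lst[head] + lst[tail]
        if pair_sum == num:
            if lst[head] == lst[tail]:
                # Everything between head and tail is the same value:
                # any 2 of the k remaining copies form a pair.
                k = tail - head + 1
                cnt += k * (k - 1) // 2
                break
            # Otherwise count the run of equal values at each end
            # and multiply the run lengths.
            lo, hi = lst[head], lst[tail]
            lo_count = hi_count = 0
            while head <= tail and lst[head] == lo:
                lo_count += 1
                head += 1
            while tail >= head and lst[tail] == hi:
                hi_count += 1
                tail -= 1
            cnt += lo_count * hi_count
        elif pair_sum > num:
            tail -= 1
        else:
            head += 1
    return cnt
```

The runs are each consumed once, so the scan after sorting is still O(n).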
Comments:
- Since the sample given by the original poster indicates that the elements are not unique, you do need to handle duplicates. – Eric Lippert, Nov 21 at 20:17
- There is a linear-time solution, which I've given in my answer. See if you can find it! – Eric Lippert, Nov 21 at 20:17
- @EricLippert With an ideal map implementation there is a linear-time solution. Unfortunately Python's map does not guarantee constant-time reads or writes; the worst-case time for reads and writes is linear with respect to the size of the map, which makes the worst-case total time for an algorithm using maps O(n^2). – David, 2 days ago
add a comment |
4 Answers
4
active
oldest
votes
4 Answers
4
active
oldest
votes
active
oldest
votes
active
oldest
votes
up vote
10
down vote
accepted
Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.
You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter
, which has amortized O(1) insertion and lookup), then we come down to O(n).
If we have a Counter
, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).
We could do with some auto tests. Consider importing the doctest
module and using it. Some good test cases to include:
1, → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
add a comment |
up vote
10
down vote
accepted
Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.
You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter
, which has amortized O(1) insertion and lookup), then we come down to O(n).
If we have a Counter
, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).
We could do with some auto tests. Consider importing the doctest
module and using it. Some good test cases to include:
1, → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
add a comment |
up vote
10
down vote
accepted
up vote
10
down vote
accepted
Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.
You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter
, which has amortized O(1) insertion and lookup), then we come down to O(n).
If we have a Counter
, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).
We could do with some auto tests. Consider importing the doctest
module and using it. Some good test cases to include:
1, → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
Removing numbers greater than the target affects the correctness of this program - they could be added to negative numbers to reach the target.
You could get a performance improvement by finding the difference between each element and the target, then looking to see if that difference is in the list. This doesn't in itself reduce the computational complexity (it's still O(n²)), but we can build on that: if the list is first sorted, and we then use a binary search to test membership we get to O(n log n); if we convert to a structure with fast lookup (such as a collections.Counter
, which has amortized O(1) insertion and lookup), then we come down to O(n).
If we have a Counter
, then we can account for all combinations of that pair by multiplying one count by the other (but we'll need to consider the special case that the number is exactly half the target).
We could do with some auto tests. Consider importing the doctest
module and using it. Some good test cases to include:
1, → 0
1, [1] → 0
1, [0, 1] → 1
0, [-1, 1] → 1
0, [0, 1] → 0
4, [1, 4, 3, 0] → 2
4, [1, 1, 3, 3] → 4
4, [2, 2, 2, 2] → 6
edited Nov 21 at 22:26
answered Nov 21 at 13:25
Toby Speight
22.4k537109
22.4k537109
add a comment |
add a comment |
up vote
6
down vote
So far every solution given has been O(n2) or O(n log n), but there is an O(n) solution, which is sketched as follows:
- Get your input into a list, as you have done so. Obviously this is O(n)
- Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.
- Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).
- Now we take our target, call it
sum
and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key isk
. We computesum-k
. Now there are two cases.
- Case 1: if
sum-k == k
then check the map to see if the value associated withk
is 2 or greater. If it is, then we have a pair(k, k)
. Output it. - Case 2: if
sum-k
is notk
then check the map to see ifsum-k
is in the map. If it is, then we have a pair(k, sum-k)
.
- Case 1: if
- The enumeration enumerates at most
n
keys, and each step is O(1), so this step is also O(n) - And we're done, with total cost O(n).
Now, can you implement this solution?
I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n)
– ChatterOne
Nov 21 at 20:48
1
@ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs.
– Eric Lippert
Nov 21 at 20:54
@ChatterOne: I note that the sorted key dictionary types and immutable dictionary types are typically balanced binaries trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries.
– Eric Lippert
Nov 21 at 20:58
@EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks.
– Brian
Nov 21 at 21:26
1
@Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure.
– Eric Lippert
Nov 21 at 21:29
|
show 1 more comment
up vote
6
down vote
So far every solution given has been O(n2) or O(n log n), but there is an O(n) solution, which is sketched as follows:
- Get your input into a list, as you have done so. Obviously this is O(n)
- Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.
- Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).
- Now we take our target, call it
sum
and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key isk
. We computesum-k
. Now there are two cases.
- Case 1: if
sum-k == k
then check the map to see if the value associated withk
is 2 or greater. If it is, then we have a pair(k, k)
. Output it. - Case 2: if
sum-k
is notk
then check the map to see ifsum-k
is in the map. If it is, then we have a pair(k, sum-k)
.
- Case 1: if
- The enumeration enumerates at most
n
keys, and each step is O(1), so this step is also O(n) - And we're done, with total cost O(n).
Now, can you implement this solution?
I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n)
– ChatterOne
Nov 21 at 20:48
1
@ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs.
– Eric Lippert
Nov 21 at 20:54
@ChatterOne: I note that the sorted key dictionary types and immutable dictionary types are typically balanced binaries trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries.
– Eric Lippert
Nov 21 at 20:58
@EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks.
– Brian
Nov 21 at 21:26
1
@Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure.
– Eric Lippert
Nov 21 at 21:29
|
show 1 more comment
up vote
6
down vote
up vote
6
down vote
So far every solution given has been O(n2) or O(n log n), but there is an O(n) solution, which is sketched as follows:
- Get your input into a list, as you have done so. Obviously this is O(n)
- Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.
- Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).
- Now we take our target, call it
sum
and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key isk
. We computesum-k
. Now there are two cases.
- Case 1: if
sum-k == k
then check the map to see if the value associated withk
is 2 or greater. If it is, then we have a pair(k, k)
. Output it. - Case 2: if
sum-k
is notk
then check the map to see ifsum-k
is in the map. If it is, then we have a pair(k, sum-k)
.
- Case 1: if
- The enumeration enumerates at most
n
keys, and each step is O(1), so this step is also O(n) - And we're done, with total cost O(n).
Now, can you implement this solution?
So far every solution given has been O(n2) or O(n log n), but there is an O(n) solution, which is sketched as follows:
- Get your input into a list, as you have done so. Obviously this is O(n)
- Create an empty map from integers to integers. The map must have an O(1) insertion operation and an O(1) contains-key operation and an O(1) lookup operation and an O(n) "enumerate all the keys" operation. Commonly-used map types in modern programming languages typically have these characteristics.
- Build a count of all the items in the input. That is, for each input item, check to see if it is in the map. If it is not, add the pair (item, 1) to the map. If it is already in the map, look up the associated value and change the map so that it has the pair (item, value + 1). All those operations are O(1) and we do them n times, so this step is O(n).
- Now we take our target, call it
sum
and we wish to enumerate the pairs which add to that target. Enumerate the keys of the map. Suppose the key isk
. We computesum-k
. Now there are two cases.
- Case 1: if
sum-k == k
then check the map to see if the value associated withk
is 2 or greater. If it is, then we have a pair(k, k)
. Output it. - Case 2: if
sum-k
is notk
then check the map to see ifsum-k
is in the map. If it is, then we have a pair(k, sum-k)
.
- Case 1: if
- The enumeration enumerates at most
n
keys, and each step is O(1), so this step is also O(n) - And we're done, with total cost O(n).
Now, can you implement this solution?
edited Nov 21 at 20:56
answered Nov 21 at 20:14
Eric Lippert
12.7k42746
12.7k42746
I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n)
– ChatterOne
Nov 21 at 20:48
1
@ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs.
– Eric Lippert
Nov 21 at 20:54
@ChatterOne: I note that the sorted key dictionary types and immutable dictionary types are typically balanced binaries trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries.
– Eric Lippert
Nov 21 at 20:58
@EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks.
– Brian
Nov 21 at 21:26
1
@Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure.
– Eric Lippert
Nov 21 at 21:29
|
show 1 more comment
I don't think that "most" map types have all the O(1) operations that you mention. They do have an amortized cost of O(1), though. Worst case scenario, they'll still be O(n)
– ChatterOne
Nov 21 at 20:48
1
@ChatterOne: "Most" was not intended to imply that I had done a survey and that 50%+1 or more met the criterion; I was speaking informally. But I am curious: can you give an example of a popular language with a popular mutable dictionary type where typical worst-case performance with non-hostile input is O(n)? Particularly if the keys are small integers, as is the case here. The C#, Java, Python, JavaScript dictionaries all have O(1) insertion and lookup when given realistic, non-hostile inputs.
– Eric Lippert
Nov 21 at 20:54
@ChatterOne: I note that the sorted key dictionary types and immutable dictionary types are typically balanced binaries trees behind the scenes, and so are O(lg n) on those operations, but here I am specifically interested in mutable unsorted dictionaries.
– Eric Lippert
Nov 21 at 20:58
@EricLippert: Absent some other work-around, the O(n) insertion cost will be paid on each resize operation. Hence, ChatterOne is making the point that calling insertion an O(1) operation (i.e., without stating that this cost is amortized) is inaccurate even for non-hostile inputs. Per the last paragraph of Dictionary.Add remarks, C# has the same cost. However, C# can avoid this cost by setting the initial capacity, a feature Python lacks.
– Brian
Nov 21 at 21:26
@Brian: Ah, I understand the point now, but I'm not sure why it is germane. First, because the algorithm I propose is O(n) whether or not the cost of a dictionary insertion is O(1) strictly, or O(1) amortized. And second, because as you note, we know ahead of time a bound on the dictionary size. And third, because we are cheerfully ignoring superlinearities in all manner of infrastructure, like the superlinear cost of garbage collections as data structures grow in an environment with collection pressure.
– Eric Lippert
Nov 21 at 21:29
up vote
2
down vote
Other minor suggestions:
- Don't leave your input()s blank. Pass a prompt so that the user knows what they're entering.
- The first time you initialize lst, it doesn't need to be held in memory; it can be left as a generator (parens instead of brackets).
- The second time you initialize lst, it does not need to be mutable, so make it a tuple instead of a list.
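Applied to the question's code, those suggestions might look like the sketch below. To keep the example self-contained it reads from a pre-parsed list of strings instead of input() (the raw list stands in for the interactive session); note that a generator over user input must be consumed before the next prompt is read, or the inputs interleave in the wrong order.

```python
raw = ["5", "1", "4", "5", "7", "1", "5"]  # stand-in for the interactive input
it = iter(raw)

count = int(next(it))                            # how many numbers
numbers = (int(next(it)) for _ in range(count))  # generator: no list built yet
values = tuple(numbers)  # consume BEFORE reading num, so the reads stay in order
num = int(next(it))      # amount to be matched
lst = tuple(v for v in values if v <= num)       # immutable, filtered copy
```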
answered Nov 21 at 14:17
Reinderien
up vote
2
down vote
You can first sort the list; this can be done (in many ways) with a complexity of O(n log n).
Once the list is sorted (small to large, for example), the problem can be solved with complexity O(n) as follows:
head = 0
tail = len(lst) - 1
while head < tail:
    s = lst[head] + lst[tail]  # avoid shadowing the built-ins list and sum
    if s == num:
        cnt += 1
        head += 1
        tail -= 1
    elif s > num:
        tail -= 1
    else:
        head += 1
This results in an overall complexity of O(n log n).
Note: This runs under the assumption that all elements are unique. This can easily be fixed, depending on how you want to handle cases with duplicate elements.
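The duplicate case the note mentions can be handled inside the same two-pointer walk by counting runs of equal values at both ends. The sketch below (count_pairs_sorted is a name introduced here, not code from the answer) counts unordered index pairs, which matches the question's expected outputs.

```python
def count_pairs_sorted(values, target):
    """Count index pairs (i < j) with values[i] + values[j] == target,
    allowing duplicate values. Two-pointer walk over a sorted copy."""
    lst = sorted(values)
    head, tail, count = 0, len(lst) - 1, 0
    while head < tail:
        s = lst[head] + lst[tail]
        if s < target:
            head += 1
        elif s > target:
            tail -= 1
        elif lst[head] == lst[tail]:
            # Everything from head to tail is the same value:
            # any 2 of the k remaining elements form a pair.
            k = tail - head + 1
            count += k * (k - 1) // 2
            break
        else:
            # Count the runs of equal values at both ends, then skip past them.
            a = b = 1
            while lst[head + a] == lst[head]:
                a += 1
            while lst[tail - b] == lst[tail]:
                b += 1
            count += a * b
            head += a
            tail -= b
    return count
```

On the question's examples this gives 2 for [1, 4, 5, 7, 1] with target 5, and 2 for the second example with target 30.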
Since the sample given by the original poster indicates that the elements are not unique, you do need to handle duplicates.
– Eric Lippert
Nov 21 at 20:17
There is a linear time solution, which I've given in my answer. See if you can find it!
– Eric Lippert
Nov 21 at 20:17
@EricLippert With an ideal map implementation there is a linear time solution. Unfortunately python's map does not guarantee constant time reads or writes, the worst case time for reads and writes are linear with respect to the size of the map which makes the worst case total time for an algorithm using maps to be O(n^2).
– David
2 days ago
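Eric Lippert's answer itself is not reproduced in this excerpt, but the hash-map approach the comments are debating can be sketched as follows (count_pairs_hashed is a name introduced here): tally each value once, then for each distinct value look up its complement. With CPython's dict this is expected O(n), subject to the amortized-resize caveats discussed above.

```python
from collections import Counter

def count_pairs_hashed(values, target):
    """Count index pairs (i < j) with values[i] + values[j] == target,
    in expected O(n) time using a hash-based Counter."""
    counts = Counter(values)
    total = 0
    for value, k in counts.items():
        complement = target - value
        if complement == value:
            total += k * (k - 1) // 2        # choose 2 of the k equal values
        elif value < complement and complement in counts:
            total += k * counts[complement]  # count each value pair once
    return total
```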
answered Nov 21 at 16:16
David
211
phwt is a new contributor. Be nice, and check out our Code of Conduct.
If you store them in a set you can, for each number entered, find if amount - number is in the set. If you ask the user to enter amount BEFORE entering all the numbers, you can do it in real time, so that by the moment the user enters the last number you'll already know all the appropriate pairs.
– bipll
Nov 21 at 13:37
@bipll Will try, Thanks!
– phwt
Nov 21 at 13:48
@phwt - that's true only if no numbers appear more than once. There's a bit more work involved in the general case - see my answer.
– Toby Speight
Nov 21 at 16:28
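With the duplicate handling Toby Speight mentions, bipll's real-time idea can be sketched as a single streaming pass (count_pairs_streaming is a hypothetical name): for each incoming number, add the count of previously seen complements, then record the number itself.

```python
from collections import defaultdict

def count_pairs_streaming(values, target):
    """Count pairs summing to target in one pass, duplicates included."""
    seen = defaultdict(int)  # value -> how many times seen so far
    total = 0
    for v in values:
        total += seen[target - v]  # pairs (earlier occurrence, v)
        seen[v] += 1
    return total
```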
Are the numbers in the list all non-negative integers? Do you know anything about the maximum value of those integers?
– Eric Lippert
Nov 21 at 20:01