This is the array version of sorted duplicate compaction. Because the input is sorted, duplicates appear in blocks, which makes one-pass in-place deduplication possible.
Problem 1: Remove Duplicates from Sorted Array
Problem description:
Given a sorted array nums, remove the duplicates in place so that each unique value appears only once, and return the number of unique elements.
What we are actually solving:
We are not cleaning the whole array in the sense of physically shrinking it. We are building a valid unique prefix at the front. Only the first k positions matter after the operation.
What we are actually doing:
- Let `slow` mark the end of the unique prefix.
- Let `fast` scan the array from left to right.
- When `nums[fast]` is a new value, grow the unique prefix and copy the value there.
- Return the length of that prefix as `slow + 1`.
```java
class Solution {
    public int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 0;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[slow]) {
                slow++;
                nums[slow] = nums[fast]; // Write the next unique value at the end of the unique prefix.
            }
        }
        return slow + 1;
    }
}
```
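To make the solution runnable on its own, here is a small usage sketch. The class name `RemoveDuplicatesDemo` and the standalone `main` harness are ours for illustration; the algorithm is the same two-pointer pass as above.

```java
import java.util.Arrays;

public class RemoveDuplicatesDemo {
    // Same two-pointer compaction as the Solution class above.
    static int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 0;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[slow]) {
                slow++;
                nums[slow] = nums[fast]; // Write the next unique value at the end of the unique prefix.
            }
        }
        return slow + 1;
    }

    public static void main(String[] args) {
        int[] nums = {0, 0, 1, 1, 1, 2, 2, 3, 3, 4};
        int k = removeDuplicates(nums);
        // Only the first k slots are meaningful after the call.
        System.out.println(k);                                           // 5
        System.out.println(Arrays.toString(Arrays.copyOfRange(nums, 0, k))); // [0, 1, 2, 3, 4]
    }
}
```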
Debug steps:
- print `slow`, `fast`, and the prefix `nums[0..slow]` after each unique write
- test `[]`, `[1,1,1]`, and `[0,0,1,1,1,2,2,3,3,4]`
- verify the invariant that `nums[0..slow]` always contains the unique values seen so far, in sorted order
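The debug steps above can be sketched as an instrumented copy of the solution. The class name and the exact print format are our choices, not part of the problem:

```java
import java.util.Arrays;

public class RemoveDuplicatesDebug {
    static int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 0;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[slow]) {
                slow++;
                nums[slow] = nums[fast];
                // Trace each unique write: slow, fast, and the current unique prefix.
                System.out.println("slow=" + slow + " fast=" + fast
                        + " prefix=" + Arrays.toString(Arrays.copyOfRange(nums, 0, slow + 1)));
            }
        }
        return slow + 1;
    }

    public static void main(String[] args) {
        removeDuplicates(new int[]{});                               // no output, returns 0
        removeDuplicates(new int[]{1, 1, 1});                        // no unique writes, returns 1
        removeDuplicates(new int[]{0, 0, 1, 1, 1, 2, 2, 3, 3, 4});  // four unique writes, returns 5
    }
}
```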
Why Sorted Order Makes This Easy
If the array were unsorted, duplicates could appear anywhere and we would need a set or sorting first.
Because the array is sorted:
- all equal values are adjacent
- once `fast` moves past a value block, we never need to revisit it
That turns the problem into simple compaction:
- keep one copy
- skip the rest
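For contrast, here is a hedged sketch of what an unsorted input would force on us: a set that remembers every value seen so far, costing O(n) extra space. The method name `removeDuplicatesUnsorted` is ours for illustration:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class UnsortedDedup {
    // Without sorted order, adjacent comparison no longer works:
    // we need a set to remember every value seen so far -> O(n) extra space.
    static int removeDuplicatesUnsorted(int[] nums) {
        Set<Integer> seen = new LinkedHashSet<>(); // preserves first-seen order
        for (int v : nums) {
            seen.add(v); // a set silently drops duplicate insertions
        }
        int slow = 0;
        for (int v : seen) {
            nums[slow++] = v; // compact the unique values into the prefix
        }
        return slow;
    }
}
```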
Dry Run
Input: [0,0,1,1,1,2,2,3,3,4]
- start: `slow = 0`, unique prefix is `[0]`
- `fast = 1`, value `0` -> duplicate, skip
- `fast = 2`, value `1` -> new value: increment `slow` to `1`, write `nums[1] = 1`
- `fast = 3, 4`, values `1` -> duplicates, skip
- `fast = 5`, value `2` -> new value: write at the next unique slot
- continue similarly
Final unique prefix: `[0,1,2,3,4]`
Return value: `5`
Important Output Rule
After the function returns k:
- only indices `0..k-1` are guaranteed meaningful
The rest of the array may still contain old values. That is normal and allowed by the problem.
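A tiny demonstration of that rule, assuming the two-pointer implementation above. With input `[1,1,2]` the tail happens to hold a stale value, and callers must not rely on anything past index `k-1`:

```java
import java.util.Arrays;

public class StaleTailDemo {
    static int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 0;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[slow]) {
                nums[++slow] = nums[fast];
            }
        }
        return slow + 1;
    }

    public static void main(String[] args) {
        int[] nums = {1, 1, 2};
        int k = removeDuplicates(nums);
        System.out.println(k);                     // 2
        // Full array still holds a leftover 2 in the tail: [1, 2, 2].
        System.out.println(Arrays.toString(nums));
    }
}
```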
Common Mistakes
- forgetting the empty-array check
- expecting the entire array after index `k` to be cleaned
- comparing with `nums[fast - 1]` after in-place writes instead of with `nums[slow]`, the confirmed unique boundary
- not recognizing that this is prefix compaction
Complexity
- Time: `O(n)`
- Space: `O(1)`
Key Takeaways
- sorted order turns deduplication into one-pass prefix compaction
- `slow` marks the end of the unique prefix
- the answer is the new prefix length, not a resized array
Pattern Extension
One good review question for Remove Duplicates from Sorted Array is whether the same invariant still holds when the input becomes degenerate: an empty array, an array of one repeated value, or an array that is already unique. That quick pressure test usually reveals whether we truly understood the pattern or only copied the happy path.
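That pressure test can be written down directly. This harness (class name and messages are ours) checks the degenerate inputs against the two-pointer solution above:

```java
import java.util.Arrays;

public class EdgeCaseChecks {
    static int removeDuplicates(int[] nums) {
        if (nums.length == 0) {
            return 0;
        }
        int slow = 0;
        for (int fast = 1; fast < nums.length; fast++) {
            if (nums[fast] != nums[slow]) {
                nums[++slow] = nums[fast];
            }
        }
        return slow + 1;
    }

    public static void main(String[] args) {
        // Empty array: k must be 0.
        if (removeDuplicates(new int[]{}) != 0) throw new AssertionError("empty");
        // One repeated value block: k must be 1.
        if (removeDuplicates(new int[]{7, 7, 7, 7}) != 1) throw new AssertionError("all equal");
        // Already unique: k equals the length and the array is unchanged.
        int[] unique = {1, 2, 3};
        if (removeDuplicates(unique) != 3) throw new AssertionError("already unique");
        if (!Arrays.equals(unique, new int[]{1, 2, 3})) throw new AssertionError("array changed");
        System.out.println("all edge cases pass");
    }
}
```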