Problem Description
Given a 0-indexed integer array nums and an integer k, you start at index 0. From your current index i, in each move you can jump to any index in the range [i+1, min(n-1, i+k)]. The goal is to reach the last index (n-1) with the maximum possible score, where the score is the sum of nums[j] over every index j you visit (including index 0 and index n-1). Return the maximum score obtainable.
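For example, with nums = [1, -1, -2, 4, -7, 3] and k = 2, one optimal path visits indices 0 → 1 → 3 → 5 and collects 1 + (-1) + 4 + 3 = 7, which is the maximum score.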
Key Insights
- The problem can be modeled using dynamic programming where dp[i] represents the maximum score to reach index i.
- The recurrence becomes dp[i] = nums[i] + max(dp[j]) over all indices j such that i-k ≤ j < i, with base case dp[0] = nums[0]; a direct O(n·k) version of this recurrence is sketched right after this list.
- A monotonic queue can be used to maintain the maximum dp value in the sliding window efficiently.
- Evicting indices that fall outside the current window (j < i - k) from the front of the deque is crucial, so the reported maximum always comes from a valid window of at most k candidates.
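Before the deque optimization, the recurrence above can be translated directly into an O(n·k) dynamic program. The following is a minimal sketch (the function name max_result_naive and the use of Python are illustrative choices, not part of the original write-up):

```python
def max_result_naive(nums, k):
    # dp[i] = best score achievable when standing on index i
    n = len(nums)
    dp = [float("-inf")] * n
    dp[0] = nums[0]  # base case: index 0 is always visited
    for i in range(1, n):
        # scan the up-to-k previous indices allowed by the jump rule
        window_best = max(dp[j] for j in range(max(0, i - k), i))
        dp[i] = nums[i] + window_best
    return dp[n - 1]
```

This version makes the recurrence explicit but spends O(k) work per index, which is what motivates replacing the inner max with a sliding-window maximum.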
Space and Time Complexity
Time Complexity: O(n), since each index is pushed onto and popped from the deque at most once, giving amortized O(1) work per index. Space Complexity: O(n) for the dp array plus the deque, which holds at most k indices at a time.
Solution
We use a dynamic programming approach where we define dp[i] as the maximum score achievable when reaching index i. The transition is defined as dp[i] = nums[i] + max(dp[i-1], dp[i-2], ..., dp[i-k]).
To efficiently get the maximum in the sliding window of size k, we use a monotonic deque that stores indices in decreasing order of their dp values. For every index i:
- Remove indices from the front if they are out-of-window (i.e., index < i - k).
- The front of the deque gives the index of the maximum dp value in the current window.
- Calculate dp[i] using dp[deque.front()] + nums[i].
- Before adding i to the deque, remove all indices from the back whose dp values are less than or equal to dp[i], so the dp values stored in the deque stay in non-increasing order.
This method ensures that each dp value is computed in constant amortized time.
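Putting the steps together, a possible sketch of the monotonic-deque solution looks as follows (again assuming Python; the name max_result and the exact structure are illustrative, not a prescribed interface):

```python
from collections import deque

def max_result(nums, k):
    n = len(nums)
    dp = [0] * n
    dp[0] = nums[0]        # base case: the start index is always visited
    dq = deque([0])        # indices whose dp values are in non-increasing order

    for i in range(1, n):
        # Step 1: evict indices that fell out of the window [i-k, i-1].
        while dq and dq[0] < i - k:
            dq.popleft()
        # Step 2: the front of the deque holds the window maximum.
        dp[i] = nums[i] + dp[dq[0]]
        # Step 3: keep the deque non-increasing before appending i.
        while dq and dp[dq[-1]] <= dp[i]:
            dq.pop()
        dq.append(i)

    return dp[n - 1]
```

On the example above (nums = [1, -1, -2, 4, -7, 3], k = 2) this returns 7, and each index is appended to and removed from the deque at most once, matching the stated O(n) bound.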