322. Coin Change

Description

Difficulty: Medium

You are given an integer array coins representing coins of different denominations and an integer amount representing a total amount of money.

Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.

You may assume that you have an infinite number of each kind of coin.

Example 1:

Input: coins = [1,2,5], amount = 11
Output: 3
Explanation: 11 = 5 + 5 + 1

Example 2:

Input: coins = [2], amount = 3
Output: -1

Example 3:

Input: coins = [1], amount = 0
Output: 0

Constraints:

  • 1 <= coins.length <= 12
  • 1 <= coins[i] <= 2^31 - 1
  • 0 <= amount <= 10^4

Solution

The idea behind this solution is Dynamic Programming (DP), specifically a bottom-up approach (also known as tabulation). The essence of this technique is to solve a complex problem by breaking it down into smaller, overlapping subproblems and solving each subproblem only once, storing its result.

Here's how that idea applies to Coin Change:

  1. Breaking Down the Problem: Instead of directly figuring out the minimum coins for the amount, we ask: "What's the minimum coins needed for 1? Then for 2? ... up to amount?" Each dp[i] represents the answer to the subproblem of making change for amount i.

  2. Building Up the Solution (Bottom-Up): We start with the simplest subproblem (dp[0] = 0, no coins for no amount). Then, for each subsequent amount i, we consider all possible coins we could use to reach i. If we use a coin, the problem reduces to finding the minimum coins for i - coin. Since we've already calculated dp[i - coin] (because i - coin is smaller than i), we just add 1 (for the current coin) to that known minimum.

  3. Optimization/Memoization (Implicit): By storing the results for each dp[i] and reusing them, we avoid redundant calculations. For example, if both amount - coinA and amount - coinB depend on the minimum coins for X, dp[X] is computed only once. This is why DP is far more efficient than naive recursion for problems with overlapping subproblems. The Math.min operation ensures we always pick the optimal (minimum) path to reach dp[i]. A brief top-down (memoized) sketch contrasting the two styles appears below.

This pattern of building up solutions from smaller, known subproblems to larger, unknown ones is fundamental to dynamic programming.
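To make the memoization point concrete, here is a minimal top-down sketch of the same recurrence. It is not the accepted solution (that follows in the Code section); the class name CoinChangeTopDown and the helper solve are names chosen here purely for illustration.

import java.util.Arrays;

class CoinChangeTopDown {
    public int coinChange(int[] coins, int amount) {
        // memo[i] caches the fewest coins for amount i; -1 marks "not computed yet".
        int[] memo = new int[amount + 1];
        Arrays.fill(memo, -1);
        int result = solve(coins, amount, memo);
        return result == Integer.MAX_VALUE ? -1 : result;
    }

    private int solve(int[] coins, int remaining, int[] memo) {
        if (remaining == 0) return 0;                       // no coins needed for amount 0
        if (memo[remaining] != -1) return memo[remaining];  // reuse a stored subproblem answer

        int best = Integer.MAX_VALUE;                       // "impossible" until a coin fits
        for (int coin : coins) {
            if (coin <= remaining) {
                int sub = solve(coins, remaining - coin, memo);
                if (sub != Integer.MAX_VALUE) {
                    best = Math.min(best, sub + 1);         // use this coin on top of the sub-answer
                }
            }
        }
        memo[remaining] = best;
        return best;
    }
}

Both versions fill the same table of subproblem answers; the bottom-up code below simply fills it iteratively from 0 up to amount, which avoids recursion overhead entirely.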

Code

Java
import java.util.Arrays;
class Solution {
    public int coinChange(int[] coins, int amount) {
        // dp[i] = fewest coins needed to make amount i.
        int[] dp = new int[amount + 1];

        // amount + 1 acts as "infinity": no valid answer uses more than amount coins.
        Arrays.fill(dp, amount + 1);
        dp[0] = 0;

        for (int i = 1; i <= amount; ++i) {
            for (int coin : coins) {
                if (coin <= i) {
                    // Either keep the current best or spend this coin on top of dp[i - coin].
                    dp[i] = Math.min(dp[i], dp[i - coin] + 1);
                }
            }
        }

        // Still "infinity" means amount cannot be formed from the given coins.
        return dp[amount] > amount ? -1 : dp[amount];
    }
}
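As a quick sanity check, a small driver (a hypothetical Main class, not required on LeetCode) reproduces the three examples from the problem statement:

class Main {
    public static void main(String[] args) {
        Solution solver = new Solution();
        System.out.println(solver.coinChange(new int[]{1, 2, 5}, 11)); // 3  (5 + 5 + 1)
        System.out.println(solver.coinChange(new int[]{2}, 3));        // -1 (3 cannot be formed)
        System.out.println(solver.coinChange(new int[]{1}, 0));        // 0  (no coins needed)
    }
}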

Time Complexity: O(amount * num_coins)
The core idea involves iterating through every possible amount from 1 to amount and, for each amount, considering every available coin. This translates directly to the nested loops above, so the total work grows with the product of the target amount and the number of coins.

Space Complexity: O(amount)
We use a dp array of size amount + 1. It stores the minimum coin counts for every sub-amount from 0 up to the target amount, enabling constant-time lookups and avoiding recomputation.
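At the maximum constraints (amount = 10^4 and 12 coin denominations), this works out to roughly 1.2 × 10^5 inner-loop iterations and a dp array of about 10^4 integers, both comfortably small.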