Convert 1D Array Into 2D Array - Problem
You are given a 0-indexed 1-dimensional (1D) integer array original, and two integers, m and n. You are tasked with creating a 2-dimensional (2D) array with m rows and n columns using all the elements from original.
The elements from indices 0 to n - 1 (inclusive) of original should form the first row of the constructed 2D array, the elements from indices n to 2 * n - 1 (inclusive) should form the second row of the constructed 2D array, and so on.
Return an m x n 2D array constructed according to the above procedure, or an empty 2D array if it is impossible.
Input & Output
Example 1 — Basic Conversion
Input:
original = [1,2,3,4], m = 2, n = 2
Output:
[[1,2],[3,4]]
💡 Note:
Elements at indices 0-1 form the first row [1,2] and elements at indices 2-3 form the second row [3,4], creating a 2×2 matrix
Example 2 — Impossible Case
Input:
original = [1,2,3], m = 1, n = 5
Output:
[]
💡 Note:
A 1×5 array needs 5 elements, but original has only 3, so return an empty array
Example 3 — Single Row
Input:
original = [1,2], m = 1, n = 2
Output:
[[1,2]]
💡 Note:
All elements fit in one row: [1,2] becomes [[1,2]]
Constraints
- 1 ≤ original.length ≤ 5 × 10⁴
- 1 ≤ original[i] ≤ 10⁵
- 1 ≤ m, n ≤ 4 × 10⁴
Visualization
Understanding the Visualization
1. Input Validation: check whether original.length == m × n.
2. Element Mapping: map each 1D index to its 2D position using division and modulo.
3. Result Formation: create the m × n 2D array with every element in its correct position (a code sketch follows below).
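The three steps above can be sketched in Python. This is a minimal sketch, not the official solution; the function name convert_1d_to_2d and the use of row slicing are illustrative choices, and an index-by-index fill using the mapping from the Key Insight below would work just as well.

```python
from typing import List

def convert_1d_to_2d(original: List[int], m: int, n: int) -> List[List[int]]:
    # Step 1: input validation -- the conversion is only possible
    # when the element counts match exactly.
    if len(original) != m * n:
        return []

    # Steps 2-3: row r takes the slice original[r*n : (r+1)*n],
    # i.e. indices r*n through r*n + n - 1, as described in the problem.
    return [original[r * n:(r + 1) * n] for r in range(m)]

# Example usage (matches the examples above):
print(convert_1d_to_2d([1, 2, 3, 4], 2, 2))  # [[1, 2], [3, 4]]
print(convert_1d_to_2d([1, 2, 3], 1, 5))     # [] (impossible case)
```

Because each slice is copied exactly once, the sketch runs in O(m × n) time and uses O(m × n) space for the output.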
Key Takeaway
🎯 Key Insight: Use the mathematical mapping (row = i / n, col = i % n, with integer division) to convert linear indices to 2D coordinates efficiently
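For example, with n = 3 the element at 1D index i = 4 maps to row 4 / 3 = 1 (integer division) and column 4 % 3 = 1, so it becomes the second element of the second row.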