# IOICJ 91 Printer ( DP, 1D / 1D Convex Optimization )

Problem Description:
You have a book with N pages in total; printing page i costs A[ i ]. You have a weird printer: printing a contiguous segment of pages [ L, R ) costs ( Sigma{ A[ i ], L ≤ i < R } )^3 + K. You want to know the minimum total cost of printing all the pages in the book.

Constraints:
1≤T≤100
1≤N≤1e5
1≤K≤1e12
0≤ cost of any page ≤10
Sigma{ N } ≤ 1e6
TL: 1000 ms

Sample Input:
2
3 5
2 4 1
5 514
3 8 2 0 1

Sample Output:
88
2108

Solution:
pfx[ i ] : Sigma{ A[ j ], 0 ≤ j < i }
dp[ i ] : minimum total cost of printing the first i pages
dp[ i ] = min{ dp[ j ] + ( pfx[ i ] - pfx[ j ] )^3 + K, 0 ≤ j < i }
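Before optimizing, the recurrence can be sanity-checked with a direct O( N^2 ) evaluation. This is a minimal sketch; the helper name `naive_min_cost` is introduced here for illustration and does not appear in the solution code:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Direct O( N^2 ) evaluation of
// dp[ i ] = min{ dp[ j ] + ( pfx[ i ] - pfx[ j ] )^3 + K, 0 <= j < i }
ll naive_min_cost( const vector< int >& A, ll K ){
    int n = A.size();
    vector< ll > pfx( n + 1, 0 ), dp( n + 1, LLONG_MAX );
    for( int i = 0; i < n; ++i ) pfx[ i + 1 ] = pfx[ i ] + A[ i ];
    dp[ 0 ] = 0; // nothing printed yet
    for( int i = 1; i <= n; ++i )
        for( int j = 0; j < i; ++j ){
            ll s = pfx[ i ] - pfx[ j ]; // total page cost of segment ( j, i ]
            dp[ i ] = min( dp[ i ], dp[ j ] + s * s * s + K );
        }
    return dp[ n ];
}
```

`naive_min_cost( { 2, 4, 1 }, 5 )` returns 88 and `naive_min_cost( { 3, 8, 2, 0, 1 }, 514 )` returns 2108, matching the two samples above.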
Evaluating this recurrence directly takes O( N^2 ), which is too slow here; we need to somehow get rid of the O( N ) factor in the transition for each dp[ i ].
Here is where 1D / 1D convex / concave optimization comes in. If one can prove ( e.g. via the Monge condition ) that the transition is decision-monotone, this optimization brings the cost of each transition down to O( lg N ). Note that even if the cost function does not satisfy the Monge condition, the transition may still be decision-monotone. Decision monotonicity means: if the best transition source for dp[ i ] is some dp[ j ], then for any k > i whose best transition source is dp[ m ], we must have m ≥ j.

If this property holds, we can accelerate the transition by maintaining a deque of segments, one per currently-known best decision. For example, in the beginning we only know dp[ 0 ], so the deque initially stores [ 0, 1, N ], which means: currently dp[ 0 ] is the best transition source for dp[ 1 .. N ]. We then move to dp[ 1 ]; since dp[ 0 ] is currently the best source, dp[ 1 ] is updated from dp[ 0 ]. Next we look at the segment stored at the back of the deque, which is currently [ 0, 1, N ], and check whether dp[ 1 ] is a better transition source than dp[ 0 ] for the first index of that segment. If it is, pop_back and examine the new back segment again. Otherwise, binary search for the first index x in that segment at which dp[ 1 ] beats the segment's source, shrink that segment's right bound to x - 1, and push_back [ 1, x, N ]. See the code below for further comprehension.
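Here the cost w( j, i ) = ( pfx[ i ] - pfx[ j ] )^3 + K is a convex function of the prefix-sum difference, which is exactly the situation where the quadrangle inequality, and hence decision monotonicity, holds. The claim can also be checked empirically: compute opt[ i ] ( the smallest best transition source ) by brute force on random inputs and verify it never decreases. This is a sketch; the names `opt_is_monotone` and `random_check` are introduced here for illustration:

```cpp
#include <bits/stdc++.h>
using namespace std;
typedef long long ll;

// Brute-force opt[ i ] = smallest argmin j of the transition for dp[ i ],
// then check that opt[ ] is nondecreasing ( decision monotonicity ).
bool opt_is_monotone( const vector< int >& A, ll K ){
    int n = A.size();
    vector< ll > pfx( n + 1, 0 ), dp( n + 1, LLONG_MAX );
    vector< int > opt( n + 1, 0 );
    for( int i = 0; i < n; ++i ) pfx[ i + 1 ] = pfx[ i ] + A[ i ];
    dp[ 0 ] = 0;
    for( int i = 1; i <= n; ++i )
        for( int j = 0; j < i; ++j ){
            ll s = pfx[ i ] - pfx[ j ];
            ll c = dp[ j ] + s * s * s + K;
            if( c < dp[ i ] ){ dp[ i ] = c; opt[ i ] = j; }
        }
    for( int i = 2; i <= n; ++i )
        if( opt[ i ] < opt[ i - 1 ] ) return false;
    return true;
}

// Randomized trials within the problem's page-cost range [ 0, 10 ];
// returns true if opt[ ] was nondecreasing in every trial.
bool random_check( int trials ){
    mt19937 rng( 91 );
    while( trials-- ){
        int n = 1 + rng() % 50;
        vector< int > A( n );
        for( int& v : A ) v = ( int )( rng() % 11 );
        ll K = 1 + ( ll )( rng() % 1000 );
        if( not opt_is_monotone( A, K ) ) return false;
    }
    return true;
}
```

`random_check( 100 )` returns true, consistent with the monotonicity argument; a counterexample-free run is of course not a proof, which is why the quadrangle-inequality reasoning matters.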

```cpp
#include <bits/stdc++.h>
using namespace std;

typedef long long ll;

const int MAXN = ( int ) 1e5;

int T;

int N;
ll K;
int A[ MAXN ];

void init(){
    cin >> N >> K;
    for( int i = 0; i < N; ++i ){
        cin >> A[ i ];
    }
}

int pfx[ MAXN + 1 ];
ll dp[ MAXN + 1 ];

ll cube( int v ){ return 1LL * v * v * v; }
// cost of reaching dp[ rb ] by printing pages ( x, rb ] as one segment
ll get_cost( int x, int rb ){
    return dp[ x ] + cube( pfx[ rb ] - pfx[ x ] ) + K;
}

void preprocess(){
    for( int i = 0; i < N; ++i ){
        pfx[ i + 1 ] = pfx[ i ] + A[ i ];
    }
    deque< tuple< int, int, int > > dcs; // decision segments ( source, lb, rb )
    dcs.emplace_back( 0, 1, N ); // dp[ 0 ] is the best transition source for dp[ 1 .. N ]
    for( int i = 1; i <= N; ++i ){
        while( get< 2 >( dcs.front() ) < i ) dcs.pop_front(); // right bound is outdated
        dp[ i ] = get_cost( get< 0 >( dcs.front() ), i ); // the front segment's source is best for dp[ i ]
        while( not dcs.empty() ){
            int x, lb, rb; tie( x, lb, rb ) = dcs.back();
            if( lb <= i ) break; // this segment still covers i; do not discard it
            if( get_cost( x, lb ) >= get_cost( i, lb ) ){
                // i beats x at the segment's left end, hence on the whole segment
                dcs.pop_back();
                if( not dcs.empty() ){
                    get< 2 >( dcs.back() ) = N; // the new back segment extends to N again
                }
            } else{
                break;
            }
        }
        // binary search the first index where source i beats the back segment's source
        int best = -1;
        for( int lb = i + 1, rb = N, x = get< 0 >( dcs.back() ); lb <= rb; ){
            int mid = ( lb + rb ) >> 1;
            if( get_cost( x, mid ) > get_cost( i, mid ) ){
                best = mid;
                rb = mid - 1;
            } else{
                lb = mid + 1;
            }
        }
        if( best == -1 ) continue; // source i never becomes the best
        get< 2 >( dcs.back() ) = best - 1;
        dcs.emplace_back( i, best, N );
    }
}

void solve(){
    cout << dp[ N ] << endl;
}

signed main(){
    ios::sync_with_stdio( 0 );
    cin.tie( 0 );
    cin >> T;
    for( int ti = 0; ti < T; ++ti ){
        init();
        preprocess();
        solve();
    }
    return 0;
}
```