
PiecewiseExponentialFitter seems to be broken

See original GitHub issue

Hi, I was running your quickstart example to familiarize myself with the library, and there seems to be an issue generating the breakpoint arrays for the piecewise fitting:


import matplotlib.pyplot as plt

from lifelines.datasets import load_waltons
df = load_waltons()  # returns a Pandas DataFrame
T = df['T']
E = df['E']

from lifelines import *

fig, axes = plt.subplots(2, 3, figsize=(9, 5))

kmf = KaplanMeierFitter().fit(T, E, label='KaplanMeierFitter')
wbf = WeibullFitter().fit(T, E, label='WeibullFitter')
exf = ExponentialFitter().fit(T, E, label='ExponentialFitter')
lnf = LogNormalFitter().fit(T, E, label='LogNormalFitter')
llf = LogLogisticFitter().fit(T, E, label='LogLogisticFitter')
# pwf = PiecewiseExponentialFitter([40, 60]).fit(T, E, label='PiecewiseExponentialFitter')

wbf.plot_survival_function(ax=axes[0][0])
exf.plot_survival_function(ax=axes[0][1])
lnf.plot_survival_function(ax=axes[0][2])
kmf.plot_survival_function(ax=axes[1][0])
llf.plot_survival_function(ax=axes[1][1])
# pwf.plot_survival_function(ax=axes[1][2])

This works, but if I uncomment

pwf = PiecewiseExponentialFitter([40, 60]).fit(T, E, label='PiecewiseExponentialFitter')

the following error is raised:

Summary: ValueError: cannot reshape array of size 0 into shape (0)

Whole trace:

ValueError                                Traceback (most recent call last)
<ipython-input-9-109e19de1150> in <module>
     29 lnf = LogNormalFitter().fit(T, E, label='LogNormalFitter')
     30 llf = LogLogisticFitter().fit(T, E, label='LogLogisticFitter')
---> 31 pwf = PiecewiseExponentialFitter([40, 60]).fit(T, E, label='PiecewiseExponentialFitter')
     32 
     33 wbf.plot_survival_function(ax=axes[0][0])

/opt/conda/lib/python3.6/site-packages/lifelines/utils/__init__.py in f(self, *args, **kwargs)
     43         def f(self, *args, **kwargs):
     44             self._censoring_type = cls.RIGHT
---> 45             return function(self, *args, **kwargs)
     46 
     47         return f

/opt/conda/lib/python3.6/site-packages/lifelines/fitters/__init__.py in fit(self, durations, event_observed, timeline, label, alpha, ci_labels, show_progress, entry, weights, left_censorship)
    710             show_progress=show_progress,
    711             entry=entry,
--> 712             weights=weights,
    713         )
    714 

/opt/conda/lib/python3.6/site-packages/lifelines/fitters/__init__.py in _fit(self, Ts, event_observed, timeline, label, alpha, ci_labels, show_progress, entry, weights)
    888         # estimation
    889         self._fitted_parameters_, self._log_likelihood, self._hessian_ = self._fit_model(
--> 890             Ts, self.event_observed.astype(bool), self.entry, self.weights, show_progress=show_progress
    891         )
    892 

/opt/conda/lib/python3.6/site-packages/lifelines/fitters/__init__.py in _fit_model(self, Ts, E, entry, weights, show_progress)
    511                 args=(Ts, E, entry, weights),
    512                 bounds=self._bounds,
--> 513                 options={"disp": show_progress},
    514             )
    515 

/opt/conda/lib/python3.6/site-packages/scipy/optimize/_minimize.py in minimize(fun, x0, args, method, jac, hess, hessp, bounds, constraints, tol, callback, options)
    601     elif meth == 'l-bfgs-b':
    602         return _minimize_lbfgsb(fun, x0, args, jac, bounds,
--> 603                                 callback=callback, **options)
    604     elif meth == 'tnc':
    605         return _minimize_tnc(fun, x0, args, jac, bounds, callback=callback,

/opt/conda/lib/python3.6/site-packages/scipy/optimize/lbfgsb.py in _minimize_lbfgsb(fun, x0, args, jac, bounds, disp, maxcor, ftol, gtol, eps, maxfun, maxiter, iprint, callback, maxls, **unknown_options)
    333             # until the completion of the current minimization iteration.
    334             # Overwrite f and g:
--> 335             f, g = func_and_grad(x)
    336         elif task_str.startswith(b'NEW_X'):
    337             # new iteration

/opt/conda/lib/python3.6/site-packages/scipy/optimize/lbfgsb.py in func_and_grad(x)
    283     else:
    284         def func_and_grad(x):
--> 285             f = fun(x, *args)
    286             g = jac(x, *args)
    287             return f, g

/opt/conda/lib/python3.6/site-packages/scipy/optimize/optimize.py in function_wrapper(*wrapper_args)
    291     def function_wrapper(*wrapper_args):
    292         ncalls[0] += 1
--> 293         return function(*(wrapper_args + args))
    294 
    295     return ncalls, function_wrapper

/opt/conda/lib/python3.6/site-packages/scipy/optimize/optimize.py in __call__(self, x, *args)
     61     def __call__(self, x, *args):
     62         self.x = numpy.asarray(x).copy()
---> 63         fg = self.fun(x, *args)
     64         self.jac = fg[1]
     65         return fg[0]

/opt/conda/lib/python3.6/site-packages/autograd/wrap_util.py in nary_f(*args, **kwargs)
     18             else:
     19                 x = tuple(args[i] for i in argnum)
---> 20             return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
     21         return nary_f
     22     return nary_operator

/opt/conda/lib/python3.6/site-packages/autograd/differential_operators.py in value_and_grad(fun, x)
    128     in scipy.optimize"""
    129     vjp, ans = _make_vjp(fun, x)
--> 130     return ans, vjp(vspace(ans).ones())
    131 
    132 def grad_and_aux(fun, argnum=0):

/opt/conda/lib/python3.6/site-packages/autograd/core.py in vjp(g)
     12         def vjp(g): return vspace(x).zeros()
     13     else:
---> 14         def vjp(g): return backward_pass(g, end_node)
     15     return vjp, end_value
     16 

/opt/conda/lib/python3.6/site-packages/autograd/core.py in backward_pass(g, end_node)
     19     for node in toposort(end_node):
     20         outgrad = outgrads.pop(node)
---> 21         ingrads = node.vjp(outgrad[0])
     22         for parent, ingrad in zip(node.parents, ingrads):
     23             outgrads[parent] = add_outgrads(outgrads.get(parent), ingrad)

/opt/conda/lib/python3.6/site-packages/autograd/core.py in <lambda>(g)
     59                     "VJP of {} wrt argnum 0 not defined".format(fun.__name__))
     60             vjp = vjpfun(ans, *args, **kwargs)
---> 61             return lambda g: (vjp(g),)
     62         elif L == 2:
     63             argnum_0, argnum_1 = argnums

/opt/conda/lib/python3.6/site-packages/autograd/numpy/numpy_vjps.py in <lambda>(g)
    355 def dot_vjp_1(ans, A, B):
    356     A_ndim, B_ndim = anp.ndim(A), anp.ndim(B)
--> 357     return lambda g: dot_adjoint_1(A, g, A_ndim, B_ndim)
    358 defvjp(anp.dot, dot_vjp_0, dot_vjp_1)
    359 

/opt/conda/lib/python3.6/site-packages/autograd/tracer.py in f_wrapped(*args, **kwargs)
     46             return new_box(ans, trace, node)
     47         else:
---> 48             return f_raw(*args, **kwargs)
     49     f_wrapped.fun = f_raw
     50     f_wrapped._is_autograd_primitive = True

/opt/conda/lib/python3.6/site-packages/autograd/numpy/numpy_vjps.py in dot_adjoint_1(A, G, A_ndim, B_ndim)
    347     else:
    348         return swap(onp.tensordot(
--> 349             G, A, [range(-A_ndim - B_ndim + 2, -B_ndim + 1), range(A_ndim - 1)]))
    350 
    351 def dot_vjp_0(ans, A, B):

/opt/conda/lib/python3.6/site-packages/numpy/core/numeric.py in tensordot(a, b, axes)
   1335     oldb = [bs[axis] for axis in notin]
   1336 
-> 1337     at = a.transpose(newaxes_a).reshape(newshape_a)
   1338     bt = b.transpose(newaxes_b).reshape(newshape_b)
   1339     res = dot(at, bt)

ValueError: cannot reshape array of size 0 into shape (0)
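
For context: the reshape failure at the bottom of the trace happens inside np.tensordot, which autograd calls while computing the gradient. numpy 1.14 changed tensordot so that contracting over a zero-length axis returns an all-zero result instead of raising, which is very likely why the numpy upgrade suggested in the comments below resolves this. A minimal sketch of that numpy behavior, independent of lifelines and assuming the contracted axis really is empty here:

import numpy as np

# Contract a dot product over an axis of length 0.
# On numpy < 1.14 this raised a ValueError similar to the one above;
# on numpy >= 1.14 it returns an all-zero (3, 4) array instead.
A = np.ones((3, 0))
B = np.ones((0, 4))
print(np.tensordot(A, B, axes=1).shape)  # (3, 4) on numpy >= 1.14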

Issue Analytics

  • State: closed
  • Created: 4 years ago
  • Comments: 9 (3 by maintainers)

Top GitHub Comments

1 reaction
ayadlin commented, Jul 15, 2019

Upgrading to 1.14 solved the issue!

Thanks-

A


0 reactions
ayadlin commented, Jul 15, 2019

Will do and will let you know!

Thanks so much!

Ale

On Mon, Jul 15, 2019 at 11:03 AM Cameron Davidson-Pilon <notifications@github.com> wrote:

Ah, investigating this I have found inconsistent numpy version pinning. In our setup.py, we suggest numpy>=1.6, but in our requirements we suggest numpy>=1.14.0.

The latter is correct, as that’s also what we test against. Can you try upgrading to numpy>=1.14.0?
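
For anyone hitting the same error, a quick way to confirm which numpy the environment is actually using before re-running the example (a minimal sketch, not from the thread):

import numpy as np

# lifelines is tested against numpy >= 1.14.0 (per the comment above)
print(np.__version__)

# if this prints something older, upgrade with e.g.:
#   pip install --upgrade "numpy>=1.14.0"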


