adding BetaBernoulli distribution with LogScore #132

Draft · wants to merge 4 commits into master (changes from 1 commit)
1 change: 1 addition & 0 deletions ngboost/distns/__init__.py
@@ -4,3 +4,4 @@
from .lognormal import LogNormal
from .exponential import Exponential
from .categorical import k_categorical, Bernoulli
from .betabernoulli import BetaBernoulli
125 changes: 125 additions & 0 deletions ngboost/distns/betabernoulli.py
@@ -0,0 +1,125 @@
from scipy.stats import betabinom as dist
from scipy.stats import beta as betadist
import numpy as np
from ngboost.distns.distn import RegressionDistn
from ngboost.scores import LogScore
from scipy.special import polygamma, gamma, digamma
from scipy.special import beta as betafunction
from fastbetabino import *
from array import array
import sys


class BetaBernoulliLogScore(LogScore):

    def score(self, Y):
        return -self.dist.logpmf(Y)

    def d_score(self, Y):
        D = np.zeros((len(Y), 2))  # first col is dS/d(log(α)), second col is dS/d(log(β))

        D[:, 0] = -self.alpha * (
            digamma(self.alpha + self.beta)
            + digamma(Y + self.alpha)
            - digamma(self.alpha + self.beta + 1)
            - digamma(self.alpha)
        )
        D[:, 1] = -self.beta * (
            digamma(self.alpha + self.beta)
            + digamma(-Y + self.beta + 1)
            - digamma(self.alpha + self.beta + 1)
            - digamma(self.beta)
        )
        return D
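
For reference (not part of the diff), the two gradients above follow from the Beta-Bernoulli log-pmf with y ∈ {0, 1}, writing ψ for the digamma function and S = −log p:

\log p(y \mid \alpha, \beta) = \log\Gamma(y + \alpha) + \log\Gamma(1 - y + \beta) + \log\Gamma(\alpha + \beta) - \log\Gamma(\alpha + \beta + 1) - \log\Gamma(\alpha) - \log\Gamma(\beta)

\frac{\partial S}{\partial \log\alpha} = -\alpha \left[ \psi(\alpha + \beta) + \psi(y + \alpha) - \psi(\alpha + \beta + 1) - \psi(\alpha) \right]

\frac{\partial S}{\partial \log\beta} = -\beta \left[ \psi(\alpha + \beta) + \psi(1 - y + \beta) - \psi(\alpha + \beta + 1) - \psi(\beta) \right]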

    def metric(self):

Collaborator:
How did you derive this? In my derivation the Fisher Information is not diagonal.

Here's what I have; it'd be great if you could paste your independent derivation so that we can double check.

[Screenshot: reviewer's derivation of the Fisher Information matrix]

Author:
I made a mistake by not including the off-diagonal terms. My calculation was based on the definition of the FI matrix as the variance of the score, so I simply squared the gradient (but forgot that it is actually a vector and the square should be S*S.T).
Can we use the second derivative instead? Are we relying on this claim: the negative expected Hessian of the log-likelihood is equal to the Fisher Information matrix F?
https://wiseodd.github.io/techblog/2018/03/11/fisher-information/
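
For reference, the identity cited there is

F(\theta) = \mathbb{E}_{Y}\left[ \nabla_\theta \log p(Y \mid \theta)\, \nabla_\theta \log p(Y \mid \theta)^{\top} \right] = -\,\mathbb{E}_{Y}\left[ \nabla_\theta^{2} \log p(Y \mid \theta) \right],

i.e. the variance-of-score and negative-expected-Hessian definitions agree (under the usual regularity conditions), so either can be used to build the natural-gradient metric.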

Author:
As per my calculation the last row is different; it is:
[formula image]
However, when I use that formula the model doesn't work: it throws the singular-matrix error again. When I include the full FI matrix from the variance definition it also throws the singular-matrix error, and when I use the diagonal matrix from the Hessian definition it simply doesn't learn.
So the only working solution is the diagonal matrix from the variance definition; I have no idea why.
If you want to try the different approaches, I have updated the code: all you need to do is comment out the other diagonal, or comment out the other metric definition. For now I'm keeping the working version in my pull request, but please check my code, since I'm not sure I didn't miss something or make another typo.

        FI = np.zeros((self.alpha.shape[0], 2, 2))
        FI[:, 0, 0] = (
            (self.alpha * (
                digamma(self.alpha + self.beta)
                + digamma(0 + self.alpha)
                - digamma(self.alpha + self.beta + 1)
                - digamma(self.alpha)
            )) ** 2 * self.dist.pmf(0)
            + (self.alpha * (
                digamma(self.alpha + self.beta)
                + digamma(1 + self.alpha)
                - digamma(self.alpha + self.beta + 1)
                - digamma(self.alpha)
            )) ** 2 * self.dist.pmf(1)
        )
        FI[:, 1, 1] = (
            (self.beta * (
                digamma(self.alpha + self.beta)
                + digamma(-0 + self.beta + 1)
                - digamma(self.alpha + self.beta + 1)
                - digamma(self.beta)
            )) ** 2 * self.dist.pmf(0)
            + (self.beta * (
                digamma(self.alpha + self.beta)
                + digamma(-1 + self.beta + 1)
                - digamma(self.alpha + self.beta + 1)
                - digamma(self.beta)
            )) ** 2 * self.dist.pmf(1)
        )
        return FI
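
To make the alternatives discussed in the thread concrete, here is a minimal sketch (not part of the PR) of the full, non-diagonal Fisher Information under the variance-of-score definition, E[S Sᵀ] with Y ∈ {0, 1}. The standalone function name and signature are illustrative only; it mirrors the gradients from d_score above.

import numpy as np
from scipy.special import digamma
from scipy.stats import betabinom

def full_fisher_information(alpha, beta):
    """Full 2x2 Fisher Information per observation, in (log alpha, log beta) coordinates."""
    pmf = betabinom(n=1, a=alpha, b=beta)
    FI = np.zeros((np.shape(alpha)[0], 2, 2))
    for y in (0, 1):
        # gradient of S = -logpmf(y) w.r.t. (log alpha, log beta), matching d_score above
        s_a = -alpha * (digamma(alpha + beta) + digamma(y + alpha)
                        - digamma(alpha + beta + 1) - digamma(alpha))
        s_b = -beta * (digamma(alpha + beta) + digamma(1 - y + beta)
                       - digamma(alpha + beta + 1) - digamma(beta))
        p = pmf.pmf(y)
        FI[:, 0, 0] += s_a ** 2 * p
        FI[:, 1, 1] += s_b ** 2 * p
        FI[:, 0, 1] += s_a * s_b * p
    FI[:, 1, 0] = FI[:, 0, 1]
    return FI

Note that for n = 1 the Beta-Binomial reduces to Bernoulli(α/(α+β)), so the two parameters are not separately identifiable and this full matrix is rank-deficient by construction, which may explain the singular-matrix errors reported above.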


class BetaBernoulli(RegressionDistn):

    n_params = 2
    scores = [BetaBernoulliLogScore]

    def __init__(self, params):
        # save the parameters
        self._params = params

        # create other objects that will be useful later
        self.log_alpha = params[0]
        self.log_beta = params[1]
        self.alpha = np.exp(self.log_alpha)
        self.beta = np.exp(self.log_beta)
        self.dist = dist(n=1, a=self.alpha, b=self.beta)

    def sigmoid(self, x):
        return 1 / (1 + np.exp(-x))

    def fit(Y):

        def fit_alpha_beta_py(impressions, clicks, alpha0=1.5, beta0=5, niter=1000):

Collaborator:
Can we clean this function up a bit? In particular it's not clear what impressions / clicks are supposed to be. If impressions is going to be a vector of ones in all cases maybe we can remove it as an argument?

Also it'd be great if we could apply black formatting.

Author:
Cleaned up the function.

            # based on https://github.com/lfiaschi/fastbetabino/blob/master/fastbetabino.pyx

            alpha_old = alpha0
            beta_old = beta0

            for it in range(niter):
                # fixed-point updates for the Beta-Binomial maximum likelihood estimates
                alpha = alpha_old * (
                    sum(digamma(c + alpha_old) - digamma(alpha_old) for c, i in zip(clicks, impressions))
                ) / (
                    sum(digamma(i + alpha_old + beta_old) - digamma(alpha_old + beta_old) for c, i in zip(clicks, impressions))
                )

                beta = beta_old * (
                    sum(digamma(i - c + beta_old) - digamma(beta_old) for c, i in zip(clicks, impressions))
                ) / (
                    sum(digamma(i + alpha_old + beta_old) - digamma(alpha_old + beta_old) for c, i in zip(clicks, impressions))
                )

                # stop early once both parameters have converged
                if np.abs(alpha - alpha_old) < 1e-10 and np.abs(beta - beta_old) < 1e-10:
                    break

                alpha_old = alpha
                beta_old = beta

            return alpha, beta

        imps = np.ones_like(Y)
        alpha, beta = fit_alpha_beta_py(imps, Y)  # fixed-point fit defined above
        return np.array([np.log(alpha), np.log(beta)])
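
For reference (not part of the diff), the fixed-point updates implemented in fit_alpha_beta_py above are

\alpha_{t+1} = \alpha_t \, \frac{\sum_j \left[ \psi(c_j + \alpha_t) - \psi(\alpha_t) \right]}{\sum_j \left[ \psi(i_j + \alpha_t + \beta_t) - \psi(\alpha_t + \beta_t) \right]}, \qquad
\beta_{t+1} = \beta_t \, \frac{\sum_j \left[ \psi(i_j - c_j + \beta_t) - \psi(\beta_t) \right]}{\sum_j \left[ \psi(i_j + \alpha_t + \beta_t) - \psi(\alpha_t + \beta_t) \right]}

where in this use every i_j = 1 and c_j = Y_j ∈ {0, 1}, so each digamma difference reduces via ψ(x + 1) = ψ(x) + 1/x.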

    def sample(self, m):
        return np.array([self.dist.rvs() for i in range(m)])

    def __getattr__(self, name):  # gives us access to BetaBernoulli.mean(), required for RegressionDistn.predict()
        if name in dir(self.dist):
            return getattr(self.dist, name)
        return None

    @property
    def params(self):
        return {'alpha': self.alpha, 'beta': self.beta}
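
A hypothetical usage sketch (not part of the PR), assuming the import added to __init__.py above and binary targets encoded as 0/1; the data here is made up purely for illustration:

import numpy as np
from ngboost import NGBRegressor
from ngboost.distns import BetaBernoulli

X = np.random.rand(500, 3)
Y = (np.random.rand(500) < 0.3).astype(float)  # binary 0/1 targets

# NGBoost with the Beta-Bernoulli distribution; LogScore is the score it declares
ngb = NGBRegressor(Dist=BetaBernoulli, verbose=False)
ngb.fit(X, Y)

dist = ngb.pred_dist(X[:5])
print(dist.params)  # {'alpha': array([...]), 'beta': array([...])}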