Bayesian questions

4,625 Bayesian questions.

I'm a bit new to Bayesian statistics so please bear with me if this question is trivial. Let's say I have $100$ observations for $2$ Bernoulli variables $X$ and $Y$. I notice that they have the ...

I'm using the abcpmc code: an Approximate Bayesian Computing (ABC) Population Monte Carlo (PMC) implementation based on Sequential Monte Carlo (SMC) with Particle Filtering techniques, described in ...

Wikipedia has an article on the Bayes estimator: https://en.wikipedia.org/wiki/Bayes_estimator. Isn't the Bayes estimator simply the value of the parameter that minimizes the expected loss of a loss ...
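For reference, a minimal restatement of the definition in question (notation introduced here, not taken from the excerpt): the Bayes estimator minimizes the posterior expected loss,
$$\hat{\theta}_{\text{Bayes}}(x) = \arg\min_{a}\, \mathbb{E}\big[L(\theta, a) \mid x\big] = \arg\min_{a} \int L(\theta, a)\, \pi(\theta \mid x)\, d\theta,$$
and under squared-error loss $L(\theta, a) = (\theta - a)^2$ this reduces to the posterior mean $\mathbb{E}[\theta \mid x]$.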

Just started to study Bayesian statistics. I am very confused by the concept of having a conditional probability on a distribution. Specifically: I understand what p(A | B) means, where A = "I am sick" and ...

The survival times, in days, of patients diagnosed with a severe form of a terminal illness are thought to be well modelled by an exponential($\theta$) distribution. We observe the survival times ...
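A sketch of the standard conjugate analysis for this setup, assuming a $\mathrm{Gamma}(\alpha, \beta)$ prior on $\theta$ in the rate parametrization (the excerpt does not specify the prior):
$$\theta \sim \mathrm{Gamma}(\alpha, \beta), \quad t_1, \dots, t_n \mid \theta \overset{iid}{\sim} \mathrm{Exp}(\theta) \;\Longrightarrow\; \theta \mid t_{1:n} \sim \mathrm{Gamma}\!\left(\alpha + n,\; \beta + \sum_{i=1}^n t_i\right),$$
since the likelihood $\theta^n e^{-\theta \sum_i t_i}$ simply adds to the Gamma prior's shape and rate.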

I recently stumbled upon de Finetti's (pretty cool) representation theorem (What is so cool about de Finetti's representation theorem?). I wondered whether the RV $\Theta$ that arises in this ...

In Bayesian regression, I am confused about how to get $f^*$ and $\sigma^*$, given $$y^* \mid \vec{y} \sim \mathcal{N}(f^*, \sigma^*)$$ $$p(y^* \mid \vec{y}) = \int p(y^* \mid \vec{w})\, p(\vec{w} \mid \vec{y})...
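For the standard Gaussian linear model (an assumption; the excerpt does not fully specify the model), the integral has a closed form: if $\vec{w} \mid \vec{y} \sim \mathcal{N}(\vec{m}, S)$ and $y^* = \vec{x}^{*\top}\vec{w} + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma_n^2)$, then
$$f^* = \vec{x}^{*\top}\vec{m}, \qquad \sigma^{*2} = \sigma_n^2 + \vec{x}^{*\top} S\, \vec{x}^{*},$$
i.e. the predictive mean comes from the posterior mean of the weights, and the predictive variance is the observation noise plus the variance contributed by posterior uncertainty in the weights.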

Bayesian Data Analysis (p. 64) says, regarding a normal model: a sensible vague prior density for $\mu$ and $\sigma$, assuming prior independence of location and scale parameters, is uniform on $(\mu, \log \sigma)$ ...

I have a multidimensional likelihood, some parameters are covariance matrices. I'm looking for general pointers on constructing a proposal density for a Metropolis-Hastings Algorithm, from a ...

So I want to fit a mixture model $$f(y) = \pi_1 f_1 (y) + \pi_2 f_2 (y)$$ where $\pi_k = P(S = k)$ and $S_i$ is a latent unobserved variable. I assume that, conditional on $S=k$, we have the model ...

Consider a supervised learning prediction task where we have some real-valued feature vector X and wish to train a model that predicts discrete class label Y. When the model is deployed, Y will be ...

Let's imagine I have the following equation $y_t = f(x_t) + e_t$, where $f(x)$ follows a Gaussian process and $e_t \sim N(0, \Sigma)$. How does one go about learning the hyperparameters, i.e., $\Sigma$ ...
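A common route is to maximize the log marginal likelihood of the GP with respect to the hyperparameters. Below is a minimal numpy sketch, assuming an RBF kernel and i.i.d. noise $\Sigma = \sigma_n^2 I$ (the function and parameter names are illustrative placeholders; a correlated $\Sigma$ only changes the term added to the kernel matrix):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_marginal_likelihood(log_params, X, y):
    """Negative GP log marginal likelihood for an RBF kernel with iid noise.

    log_params = [log signal_var, log lengthscale, log noise_var]
    (log-parametrized so the optimizer works on an unconstrained scale).
    """
    sig2, ell, noise2 = np.exp(log_params)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)   # squared distances
    K = sig2 * np.exp(-0.5 * d2 / ell**2) + noise2 * np.eye(len(y))
    L = np.linalg.cholesky(K)                                    # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))          # K^{-1} y
    return (0.5 * y @ alpha
            + np.sum(np.log(np.diag(L)))                         # 0.5 * log|K|
            + 0.5 * len(y) * np.log(2 * np.pi))

# usage sketch on toy data
X = np.linspace(0, 1, 30)[:, None]
y = np.sin(6 * X[:, 0]) + 0.1 * np.random.randn(30)
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(X, y))
print(np.exp(res.x))  # fitted [signal_var, lengthscale, noise_var]
```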

I'm trying to learn Bayesian statistics from "Statistical Rethinking" by Richard McElreath. In chapter 4, a model with a Gaussian distribution of heights is introduced: $h_i \sim N(\mu, \sigma)$ $\mu \...

As an exercise to learn how to manually code MCMC, I've built a Metropolis-Hastings sampler on top of a multinomial-Dirichlet posterior distribution. Since a closed form solution exists, I can compare ...
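For context, a minimal sketch of that kind of sanity check, with an assumed $\mathrm{Dirichlet}(\boldsymbol{\alpha})$ prior, made-up multinomial counts, and a Dirichlet proposal centred at the current state (all illustrative choices, not necessarily those of the question):

```python
import numpy as np
from scipy.stats import dirichlet

rng = np.random.default_rng(0)

alpha = np.ones(3)                 # Dirichlet prior hyperparameters (illustrative)
counts = np.array([20, 50, 30])    # observed multinomial counts (illustrative)

def log_post(p):
    # log posterior up to a constant: Dirichlet prior * multinomial likelihood
    return np.sum((alpha - 1) * np.log(p)) + np.sum(counts * np.log(p))

def mh_sample(n_iter=20000, concentration=200.0):
    p = np.ones(3) / 3
    samples = []
    for _ in range(n_iter):
        prop = rng.dirichlet(concentration * p)                  # proposal around p
        log_acc = (log_post(prop) - log_post(p)
                   + dirichlet.logpdf(p, concentration * prop)   # Hastings correction
                   - dirichlet.logpdf(prop, concentration * p))
        if np.log(rng.uniform()) < log_acc:
            p = prop
        samples.append(p)
    return np.array(samples)

samples = mh_sample()
print("MH posterior mean:      ", samples[5000:].mean(axis=0))
print("Closed-form (Dirichlet):", (alpha + counts) / (alpha + counts).sum())
```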

I have a question about mixtures of conjugate priors. I have learnt about and seen the mixture of conjugate priors a couple of times while learning Bayesian statistics. I am wondering why this theorem is such ...
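As a concrete illustration of why the result is useful (Bernoulli likelihood with a Beta-mixture prior, chosen here for concreteness): the posterior is again a Beta mixture, with only the weights and parameters updated,
$$\pi(\theta) = \sum_j w_j\, \mathrm{Beta}(\theta; a_j, b_j) \;\Longrightarrow\; \pi(\theta \mid s, f) = \sum_j w_j'\, \mathrm{Beta}(\theta; a_j + s,\, b_j + f), \qquad w_j' \propto w_j\, \frac{B(a_j + s,\, b_j + f)}{B(a_j, b_j)},$$
where $s$ and $f$ are the success and failure counts and $B(\cdot,\cdot)$ is the Beta function. Because mixtures of conjugate priors can approximate a very wide class of prior shapes while keeping closed-form updates, the result is more than a curiosity.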

Consider the following definition: A family $\cal F$ of probability distributions on $\Theta$ is said to be conjugate (or closed under sampling) for a likelihood function $f(x|\theta)$ if for every $\...

From my understanding, to calculate the posterior probability of a sample $x$ belonging to a class $k$ using Linear Discriminant Analysis you would first calculate the eigenvector matrix $W$ required ...

I work in deep learning research and I am trying to learn how to use variational inference in order to approximate a posterior over the learned weights. I have looked extensively at Yarin Gal's ...

I'm struggling to articulate my question. Introduction: Work Orders (WOs) are instructions to a technician to perform specific maintenance actions on a specific piece of machinery. Each work order ...

Given a density $f(x)$ (e.g. the log-normal distribution or log-$t_{\nu=3}$ distribution), I was wondering what algorithms are known/typically used to find a mixture of distributions $g_r(x)$ from ...

My knowledge of probability is basic, and I understand the Bayesian interpretation only roughly. The following is part of this paper. It is about how p can be rational for person 1 and not-p ...

What conjugate priors do we have for the (multivariate) model $x_{t+1} = A x_t + \eta_t$, where $\eta_t \overset{iid}{\sim} N(0, \Sigma)$? I was thinking of using $\tilde{x} = Diag[x_1, \ldots, x_{n-1}]$, $\tilde{y}...

Assume there are 400 athletes in a training camp, who are required to attend the morning drill starting at 4 am. Attendance at morning drills is 70%, i.e., on average, 280 athletes are present. ...
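The excerpt cuts off before the actual question, but the setup is a $\mathrm{Binomial}(400, 0.7)$ attendance model; below is a small scipy sketch of the kind of quantity such exercises typically ask for (the threshold of 300 is an illustrative placeholder):

```python
import numpy as np
from scipy.stats import binom, norm

n, p = 400, 0.7                  # athletes, attendance probability
mean = n * p                     # 280
sd = np.sqrt(n * p * (1 - p))    # ~9.17

# e.g. probability that at least 300 athletes attend (illustrative threshold)
print("exact binomial:", binom.sf(299, n, p))
print("normal approx.:", norm.sf(299.5, loc=mean, scale=sd))  # with continuity correction
```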

I'm thinking about univariate density estimation. Original Question In parametric inference, you assume the data are generated from a density that can be summarized by finitely-many parameters. You ...

While dealing with anomaly detection using a probabilistic model I need to compute the probability of an example coming out of the model I built. More specifically: If $p(X)$ is the model I built and ...

I'm using pyMC3 to do Bayesian estimation supersedes the t test (BEST) and I was wondering how to actually interpret this result. I see both groups have significantly different stds because the bar ...
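For context, a minimal PyMC3 sketch of a BEST-style model in the spirit of Kruschke's paper; the priors and placeholder data below are illustrative, not necessarily those used in the question:

```python
import numpy as np
import pymc3 as pm

y1, y2 = np.random.randn(50) + 1.0, np.random.randn(50)   # placeholder data
pooled = np.concatenate([y1, y2])

with pm.Model() as best:
    # group means and standard deviations
    mu1 = pm.Normal("mu1", mu=pooled.mean(), sd=10 * pooled.std())
    mu2 = pm.Normal("mu2", mu=pooled.mean(), sd=10 * pooled.std())
    sd1 = pm.Uniform("sd1", lower=pooled.std() / 100, upper=pooled.std() * 100)
    sd2 = pm.Uniform("sd2", lower=pooled.std() / 100, upper=pooled.std() * 100)
    nu = pm.Exponential("nu", 1 / 29.0) + 1                 # heavy-tail parameter

    pm.StudentT("obs1", nu=nu, mu=mu1, sd=sd1, observed=y1)
    pm.StudentT("obs2", nu=nu, mu=mu2, sd=sd2, observed=y2)

    # the quantities whose posterior distributions BEST reports
    pm.Deterministic("diff_of_means", mu1 - mu2)
    pm.Deterministic("diff_of_stds", sd1 - sd2)

    trace = pm.sample(2000, tune=1000)

print(pm.summary(trace))
```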

I'm trying to learn Bayesian structural time series analysis. For a variety of reasons I need to use Python (mostly pymc3), not R, so please do not suggest the ...

The title pretty much says it all. I wonder whether there is any difference in the way Bayesians understand sufficiency vs. the way orthodox statistics understands sufficiency, or are they equivalent? ...

I know I have seen some research, perhaps in the contexts of time-varying topic models, on the popularity of Bayesian methods in statistics and machine learning over the last 20 years. Unfortunately ...

Fisher wrote: "the theory of inverse probability is founded upon an error, and must be wholly rejected". I wonder what Fisher's reasoning was and what error he meant in particular. The quote is ...

According to the table of conjugate distributions on Wikipedia, the hypergeometric distribution has as conjugate prior a beta-binomial distribution, where the parameter of interest is "$M$, the number ...

Consider the ideal case in which the probability structure underlying the categories is known perfectly. Why is it that with the Bayes classifier we achieve the best performance that can be achieved ...
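The usual argument, sketched: write $g^*(x) = \arg\max_k P(Y = k \mid X = x)$ for the Bayes classifier. For any classifier $g$,
$$P\big(g(X) \neq Y \mid X = x\big) = 1 - P\big(Y = g(x) \mid X = x\big) \;\ge\; 1 - \max_k P(Y = k \mid X = x) = P\big(g^*(X) \neq Y \mid X = x\big),$$
and integrating over $x$ gives $R(g) \ge R(g^*)$. So when the probability structure is known exactly, no classifier can have a lower error rate than the Bayes classifier.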

As I understand it, deep neural networks are performing "representation learning" by layering features together. This allows learning very high dimensional structures in the features. Of course, it's ...

I have a (probably) biased coin. I want to make Bayesian inference on $\lambda = p(\text{coin} = \text{heads})$. I define my prior as $\lambda \sim \mathrm{Beta}(a=1, b=1)$. I toss the coin and update my ...
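With a $\mathrm{Beta}(1, 1)$ prior the update is conjugate; a minimal scipy sketch with made-up toss counts:

```python
from scipy.stats import beta

a, b = 1, 1            # Beta(1, 1) prior, i.e. uniform on lambda
heads, tails = 7, 3    # illustrative toss results

posterior = beta(a + heads, b + tails)      # Beta(8, 4)
print("posterior mean:", posterior.mean())  # (a + heads) / (a + b + heads + tails)
print("95% credible interval:", posterior.interval(0.95))
```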

Let $X_n$ be a binomial distribution with parameter $\theta$. Empirically, after $n$ throws, my estimate is $\hat{\theta}_n=\frac{S_n}{S_n+F_n}$, where $S/F$ are the successes and failures $(F_n:=n-...

I have a very noisy/multimodal likelihood function for a 6-parameter model. The popular emcee sampler fails miserably (no matter how many chains I use and for how ...

(This question is inspired by this comment from Xi'an.) It is well known that if the prior distribution $\pi(\theta)$ is proper and the likelihood $L(\theta | x)$ is well-defined, then the posterior ...

I am using the twang package in R to balance two groups by creating propensity scores, which are then used as weights in the svyglm for a weighted regression of the two groups. I would like however ...

I have estimated a Weibull regression model in JAGS using rjags and R2JAGS. The estimated posterior predictive p-values using the step() function confuse me. They make sense (comparing them to lower ...

I am new to PyMC3 and currently trying to do parameter estimation with it. I have a set of data which is assumed to be a mixture of a uniform distribution and a von Mises distribution. I found that the available ...
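One way to express such a model in PyMC3 is pm.Mixture with a circular-uniform "background" component and a von Mises component; a sketch under assumed priors and placeholder data (not necessarily what the question requires):

```python
import numpy as np
import pymc3 as pm

data = np.random.vonmises(0.5, 4.0, size=200)   # placeholder angular data in (-pi, pi]

with pm.Model() as model:
    w = pm.Dirichlet("w", a=np.ones(2))              # mixture weights
    mu = pm.VonMises("mu", mu=0.0, kappa=0.5)        # location of the von Mises part
    kappa = pm.HalfNormal("kappa", sigma=5.0)        # concentration

    components = [
        pm.Uniform.dist(lower=-np.pi, upper=np.pi),  # uniform "background" on the circle
        pm.VonMises.dist(mu=mu, kappa=kappa),
    ]
    pm.Mixture("obs", w=w, comp_dists=components, observed=data)

    trace = pm.sample(2000, tune=1000)
```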

What is the relation between the effective sample size $n$ and the model dimension (the effective number of parameters) $p$ in Bayesian model selection? Or are there any articles discussing this? I ...

Context: I have a psychology experiment with a 2 x 2 design (with Condition (label, no label) and ContrastType (head, tail) as ...

I was reading about how to use collapsed Gibbs sampling for latent Dirichlet allocation in a Google group, and one user talked about using Dirichlet priors with small hyperparameters and summing out the z ...
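For reference, the collapsed Gibbs update usually discussed in that context, with the topic proportions and topic-word distributions integrated out (notation assumed here: $n^{-i}_{d_i,k}$ counts tokens in document $d_i$ assigned to topic $k$ excluding token $i$, $n^{-i}_{k,w_i}$ counts how often word $w_i$ is assigned to topic $k$, $n^{-i}_{k,\cdot}$ is the total count for topic $k$, $V$ is the vocabulary size, and $\alpha, \beta$ are the Dirichlet hyperparameters):
$$p(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \;\propto\; \big(n^{-i}_{d_i,k} + \alpha\big)\, \frac{n^{-i}_{k,w_i} + \beta}{n^{-i}_{k,\cdot} + V\beta}.$$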

I've recently used the package BayesFactor in R with the default prior scale r. I have been advised to adjust the Cauchy width based on some pilot data rather than ...

I am quite confused about the objective function of Bayesian model averaging in the paper "Bayesian Averaging of Classifiers and the Overfitting Problem". In Section 2, here is the first ...

Preface I must say I am aware of previous discussions (e.g. this one) and also of this excellent, didactic proof using Fubini's theorem as presented by Jared Niemi [I'm not saying Jared Niemi is the ...

Let's say I have a list of 100 phone numbers. I call them all. Nobody picks up for 70. I get someone on the line for 30. Of those, 10 are wrong numbers. What can I conclude about the distribution of ...
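A minimal sketch of one Bayesian way to frame this, with independent uniform $\mathrm{Beta}(1,1)$ priors on the answer rate and on the wrong-number rate among answered calls; this is a modelling assumption, since the excerpt cuts off before saying which quantity is of interest:

```python
from scipy.stats import beta

# data from the question: 100 calls, 30 answered, 10 of those were wrong numbers
answered, not_answered = 30, 70
wrong, right = 10, 20

answer_rate = beta(1 + answered, 1 + not_answered)
wrong_rate_given_answer = beta(1 + wrong, 1 + right)

print("answer rate:      ", answer_rate.mean(), answer_rate.interval(0.95))
print("wrong-number rate:", wrong_rate_given_answer.mean(), wrong_rate_given_answer.interval(0.95))
```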

In the textbook Probability Theory: The Logic of Science, written by E. T. Jaynes, on page 13 it reads: For many years, there has been controversy over ‘frequentist’ versus ‘...

Oversampling or SMOTE is useful when the data is imbalanced. Here is the question I can't find an answer to: since we are dealing with probabilities in Bayesian networks (probabilistic graphical models), ...
