951.
The prevalence of interval censored data is increasing in medical studies due to the growing use of biomarkers to define a disease progression endpoint. Interval censoring results from periodic monitoring of the progression status: for example, disease progression is established in the interval between the clinic visit where progression is recorded and the prior clinic visit where there was no evidence of disease progression. A methodology is proposed for estimation and inference on the regression coefficients in the Cox proportional hazards model with interval censored data. The methodology is based on estimating equations and uses an inverse probability weight to select event time pairs where the ordering is unambiguous. Simulations are performed to examine the finite sample properties of the estimator, and a colon cancer data set is used to demonstrate its performance relative to the conventional partial likelihood estimate that ignores the interval censoring.
952.
In many randomized clinical trials, the primary response variable, for example, the survival time, is not observed directly after the patients enroll in the study but rather observed after some period of time (lag time). It is often the case that such a response variable is missing for some patients due to censoring that occurs when the study ends before the patient’s response is observed or when the patients drop out of the study. It is often assumed that censoring occurs at random, which is referred to as noninformative censoring; however, in many cases such an assumption may not be reasonable. If the missing data are not analyzed properly, the estimator or test for the treatment effect may be biased. In this paper, we use semiparametric theory to derive a class of consistent and asymptotically normal estimators for the treatment effect parameter which are applicable when the response variable is right censored. The baseline auxiliary covariates and post-treatment auxiliary covariates, which may be time-dependent, are also considered in our semiparametric model. These auxiliary covariates are used to derive estimators that both account for informative censoring and are more efficient than the estimators which do not consider the auxiliary covariates.
953.
In sample surveys and many other areas of application, the ratio of variables is often of great importance. This often occurs when one variable is available at the population level while another variable of interest is available for sample data only. In this case, using the sample ratio, we can often gather valuable information on the variable of interest for the unsampled observations. In many other studies, the ratio itself is of interest, for example when estimating proportions from a random number of observations. In this note we compare three confidence intervals for the population ratio: a large-sample interval, a log-based version of the large-sample interval, and Fieller’s interval. This is done through data analysis and through a small simulation experiment. The Fieller method has often been proposed as a superior interval for small sample sizes. We show through a data example and simulation experiments that Fieller’s method often gives nonsensical and uninformative intervals when the observations are noisy relative to the mean of the data. The large-sample interval does not similarly suffer and thus can be a more reliable method for small and large samples.
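The three intervals compared in this abstract can be sketched for paired samples as follows. This is an illustrative implementation using standard textbook forms (delta-method variance for the large-sample and log-based intervals, and the usual Fieller quadratic), not necessarily the authors' exact formulation; the function name `ratio_cis` is hypothetical.

```python
import numpy as np
from scipy import stats

def ratio_cis(x, y, alpha=0.05):
    """Three confidence intervals for the ratio mu_y / mu_x from paired samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    r = ybar / xbar
    sxx, syy = x.var(ddof=1), y.var(ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    t = stats.t.ppf(1 - alpha / 2, n - 1)

    # Large-sample interval: delta-method variance of the ratio
    var_r = (syy - 2 * r * sxy + r**2 * sxx) / (n * xbar**2)
    se = np.sqrt(var_r)
    large = (r - t * se, r + t * se)

    # Log-based version: delta method applied to log(r), then exponentiated
    se_log = se / r
    log_ci = (r * np.exp(-t * se_log), r * np.exp(t * se_log))

    # Fieller: rho satisfies a*rho^2 - 2*b*rho + c <= 0
    a = xbar**2 - t**2 * sxx / n
    b = xbar * ybar - t**2 * sxy / n
    c = ybar**2 - t**2 * syy / n
    disc = b**2 - a * c
    if a <= 0 or disc < 0:
        fieller = (-np.inf, np.inf)  # unbounded / uninformative case
    else:
        fieller = ((b - np.sqrt(disc)) / a, (b + np.sqrt(disc)) / a)
    return large, log_ci, fieller
```

The `a <= 0` branch is exactly the situation the abstract criticizes: when the denominator mean is noisy relative to its size, the Fieller quadratic opens downward and the "interval" becomes the whole line or the complement of an interval.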
954.
The prevalence of heavy-tailed, peaked and skewed uncertainty phenomena has been cited in the literature dealing with economics, physics, and engineering data. This fact has invigorated the search for continuous distributions of this nature. In this paper we shall generalize the two-sided framework presented in Kotz and van Dorp (Beyond beta: other continuous families of distributions with bounded support and applications. World Scientific Press, Singapore, 2004) for the construction of families of distributions with bounded support via a mixture technique utilizing two generating densities instead of one. The family of Elevated Two-Sided Power (ETSP) distributions is studied as an instance of this generalized framework. Through a moment ratio diagram comparison, we demonstrate that the ETSP family allows for a remarkable flexibility when modeling heavy-tailed and peaked, but skewed, uncertainty phenomena. We shall demonstrate its applicability via an illustrative example utilizing 2008 US income data.
955.
This paper extends the scedasticity comparison among several groups of observations, usually complying with the homoscedastic and the heteroscedastic cases, in order to deal with data sets lying in an intermediate situation. As is well known, homoscedasticity corresponds to equality in orientation, shape and size of the group scatters. Here our attention is focused on two weaker requirements: scatters with the same orientation, but with different shape and size, or scatters with the same shape and size but different orientation. We introduce a multiple testing procedure that takes into account each of the above conditions. This approach discloses richer information on the underlying structure of the data than the classical method based only on homo/heteroscedasticity. At the same time, it allows a more parsimonious parametrization, whenever the patterned model is appropriate to describe the real data. The new inferential methodology is then applied to some well-known data sets, chosen in the multivariate literature, to show the real gain in using this more informative approach. Finally, a wide simulation study illustrates and compares the performance of the proposal using data sets with gradual departure from homoscedasticity.
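The orientation/shape/size decomposition of a group scatter that this comparison rests on can be sketched with the common spectral parametrization Σ = λ Γ A Γ′ (an assumption about the paper's exact formulation): λ is the size, the normalized eigenvalues A the shape, and the eigenvectors Γ the orientation. The helper name `decompose` is hypothetical.

```python
import numpy as np

def decompose(cov):
    """Split a covariance matrix into size, shape, and orientation
    following the lambda * Gamma * A * Gamma' parametrization."""
    eigval, eigvec = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; reverse for principal-axes order
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    size = np.prod(eigval) ** (1.0 / len(eigval))  # lambda = |cov|^(1/p)
    shape = eigval / size                          # diagonal of A, so that |A| = 1
    return size, shape, eigvec                     # orientation Gamma (columns)
```

Two groups then share orientation when their Γ matrices agree (up to column signs), and share shape and size when their λ and A agree, which is exactly the pair of weaker hypotheses tested above.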
956.
In empirical Bayes inference one is typically interested in sampling from the posterior distribution of a parameter with a hyper-parameter set to its maximum likelihood estimate. This is often problematic, particularly when the likelihood function of the hyper-parameter is not available in closed form and the posterior distribution is intractable. Previous works have dealt with this problem using a multi-step approach based on the EM algorithm and Markov Chain Monte Carlo (MCMC). We propose a framework based on recent developments in adaptive MCMC, where this problem is addressed more efficiently using a single Monte Carlo run. We discuss the convergence of the algorithm and its connection with the EM algorithm. We apply our algorithm to the Bayesian Lasso of Park and Casella (J. Am. Stat. Assoc. 103:681–686, 2008) and to the empirical Bayes variable selection of George and Foster (Biometrika 87:731–747, 2000).
957.
This paper proposes a functional connectivity approach, inspired by the brain imaging literature, to model cross-sectional dependence. Using a varying parameter framework, the model allows correlation patterns to arise from complex economic or social relations rather than being simply functions of economic or geographic distances between locations. It nests the conventional spatial and factor model approaches as special cases. A Bayesian Markov Chain Monte Carlo method implements this approach. A small-scale Monte Carlo study is conducted to evaluate the performance of this approach in finite samples, where it outperforms both a spatial model and a factor model. We apply the functional connectivity approach to estimate a hedonic housing price model for Paris using housing transactions over the period 1990–2003. It allows us to get more information about complex spatial connections and appears better suited to capturing the cross-sectional dependence than the conventional methods.
958.
The control and treatment of dyslipidemia is a major public health challenge, particularly for patients with coronary heart diseases. In this paper we propose a framework for survival analysis of patients who had a major cardiac event, focusing on assessment of the effect of changing LDL-cholesterol level and statin consumption on survival. This framework includes a Cox PH model and a Markov chain, and combines their results into reinforced conclusions regarding the factors that affect survival time. We prospectively studied 2,277 cardiac patients, and the results show high congruence between the Markov model and the PH model; both indicate that diabetes, history of stroke, peripheral vascular disease and smoking significantly increase the hazard rate and reduce survival time. On the other hand, statin consumption is correlated with a lower hazard rate and longer survival time in both models. The role of such a framework in understanding the therapeutic behavior of patients and implementing effective secondary and primary prevention of heart diseases is discussed.
959.
In this paper we introduce a new extension for the Birnbaum–Saunders distribution based on the family of the epsilon-skew-symmetric distributions studied in Arellano-Valle et al. (J Stat Plan Inference 128(2):427–443, 2005). The extension allows generating Birnbaum–Saunders-type distributions able to deal with extreme or outlying observations (Dupuis and Mills, IEEE Trans Reliab 47:88–95, 1998). Basic properties such as moments and the Fisher information matrix are also studied. Results of a real data application are reported, illustrating the good fitting properties of the proposed model.
960.
A double-censoring scheme occurs when the lifetimes T being measured, from a well-known time origin, are exactly observed within a window [L, R] of observational time and are otherwise censored either from above (right-censored observations) or below (left-censored observations). The sample data consist of the pairs (U, δ), where U = min{R, max{T, L}} and δ indicates whether T is exactly observed (δ = 0), right-censored (δ = 1) or left-censored (δ = −1). We are interested in the estimation of the marginal behaviour of the three random variables T, L and R based on the observed pairs (U, δ). We propose new nonparametric simultaneous marginal estimators Ŝ_T, Ŝ_L and Ŝ_R for the survival functions of T, L and R, respectively, by means of an inverse-probability-of-censoring approach. The proposed estimators Ŝ_T, Ŝ_L and Ŝ_R are not computationally intensive, generalize the empirical survival estimator and reduce to the Kaplan–Meier estimator in the absence of left-censored data. Furthermore, Ŝ_T is equivalent to a self-consistent estimator, is uniformly strongly consistent and asymptotically normal. The method is illustrated with data from a cohort of drug users recruited in a detoxification program in Badalona (Spain). For these data we estimate the survival function for the elapsed time from starting IV-drugs to AIDS diagnosis, as well as the potential follow-up time. A simulation study is discussed to assess the performance of the three survival estimators for moderate sample sizes and different censoring levels.
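The observed-data construction defined in this abstract (U = min{R, max{T, L}} together with the censoring indicator δ) can be sketched directly; the helper name `observe` is hypothetical, and ties (T equal to L or R) are treated as exact observations, which is one possible convention.

```python
def observe(t, l, r):
    """Map a latent triple (T, L, R) to the observed pair (U, delta)
    under the double-censoring scheme U = min(R, max(T, L))."""
    u = min(r, max(t, l))
    if t > r:
        delta = 1       # right-censored: only T > R is known, U = R
    elif t < l:
        delta = -1      # left-censored: only T < L is known, U = L
    else:
        delta = 0       # exactly observed within the window [L, R]
    return u, delta
```

Note that δ = −1 never occurs when L = 0 for all subjects, which is the sense in which the estimator Ŝ_T reduces to the Kaplan–Meier estimator in the absence of left-censored data.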