41.
Abstract

Under non-additive probabilities, the cluster points of the empirical average have been shown to fall, quasi-surely, within the interval bounded either by the lower and upper expectations or by the lower and upper Choquet expectations. In this paper, based on a newly introduced notion of independence, we obtain a different Marcinkiewicz–Zygmund type strong law of large numbers. The Kolmogorov type strong law of large numbers then follows from it directly, stating that the closed interval between the lower and upper expectations is the smallest interval that quasi-surely covers the cluster points of the empirical average.
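As a point of reference, the Kolmogorov-type limit summarized above can be written out in the notation common to sublinear-expectation work; the convention below (an upper expectation and its conjugate lower expectation) is our assumption, not necessarily the paper's.

```latex
% Hedged sketch of the Kolmogorov-type statement summarized above, in
% assumed (standard sublinear-expectation) notation, not the paper's own.
% Let $X_1, X_2, \dots$ be i.i.d.\ under an upper expectation
% $\mathbb{E}$, with lower expectation $-\mathbb{E}[-\,\cdot\,]$, and
% let $S_n = X_1 + \cdots + X_n$. Then, quasi-surely,
\[
  -\mathbb{E}[-X_1] \;\le\; \liminf_{n\to\infty}\frac{S_n}{n}
  \;\le\; \limsup_{n\to\infty}\frac{S_n}{n} \;\le\; \mathbb{E}[X_1],
\]
% and $[-\mathbb{E}[-X_1],\, \mathbb{E}[X_1]]$ is the smallest closed
% interval that quasi-surely contains all cluster points of $S_n/n$.
```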
42.
Single-cohort stage-frequency data arise when the stage reached by each individual is assessed through destructive sampling. For this type of data, when all hazard rates are assumed constant and equal, Laplace transform methods have previously been applied to estimate the parameters of each stage-duration distribution and the overall hazard rates. When the hazard rates are not all equal, estimating stage-duration parameters by Laplace transform methods becomes complex. In this paper, two new models are proposed for estimating stage-dependent maturation parameters using Laplace transform methods when non-trivial hazard rates apply. The first model allows hazard rates that are constant within each stage but vary between stages; the second allows time-dependent hazard rates within stages. The paper also introduces a method for estimating the hazard rate in each stage under the stage-wise constant hazard rates model. The methods could be used in specific types of laboratory study, but the main motivation is to explore relationships between stage maturation parameters that, in future work, could be exploited in Bayesian approaches. The application of the methodology to each model is evaluated using simulated data in order to illustrate the structure of the models.
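To make the first model concrete, here is a minimal simulation sketch of single-cohort stage-frequency data with hazard rates constant within each stage but differing between stages; all rates, sampling times, and sample sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hedged sketch (not the paper's code): stage durations and deaths are
# exponential with stage-specific rates; destructive sampling at fixed
# times records which stage each sampled individual occupies.
maturation_rates = [0.8, 0.5]        # out of stages 1 and 2 (stage 3 absorbing)
death_rates = [0.1, 0.2, 0.15]       # per-stage mortality hazards
n_stages = len(death_rates)
sample_times = [1.0, 2.0, 4.0, 6.0]  # destructive sampling occasions
n_per_sample = 200

def stage_at(t):
    """Stage (1-based) occupied at time t, or None if dead by t."""
    clock = 0.0
    for stage in range(1, n_stages + 1):
        death = rng.exponential(1.0 / death_rates[stage - 1])
        mature = (rng.exponential(1.0 / maturation_rates[stage - 1])
                  if stage < n_stages else np.inf)
        if clock + min(mature, death) > t:
            return stage                 # still alive in this stage at t
        if death <= mature:
            return None                  # died in this stage before t
        clock += mature                  # advanced to the next stage

for t in sample_times:
    stages = [stage_at(t) for _ in range(n_per_sample)]
    counts = [stages.count(s) for s in range(1, n_stages + 1)]
    print(f"t={t}: stage counts {counts}, dead {stages.count(None)}")
```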
43.
This paper deals with a longitudinal semi-parametric regression model in a generalised linear model setup for repeated count data collected from a large number of independent individuals. To accommodate the longitudinal correlations, we consider a dynamic model for repeated counts whose auto-correlations decay as the time lag between the repeated responses increases. The semi-parametric regression function in the model combines a specified regression function in some suitable time-dependent covariates with a non-parametric function in other time-dependent covariates. Because the non-parametric function is of secondary inferential interest, we estimate it consistently using the well-known quasi-likelihood approach under a working independence assumption. The proposed longitudinal correlation structure and the estimate of the non-parametric function are then used to develop a semi-parametric generalised quasi-likelihood approach for consistent and efficient estimation of the regression effects in the parametric regression function. The finite-sample performance of the proposed estimation approach is examined through an intensive simulation study based on both large and small samples, incorporating both balanced and unbalanced cluster sizes. The asymptotic properties of the estimators are also given. The estimation methodology is illustrated by reanalysing the well-known health care utilisation data, consisting of counts of yearly visits to a physician by 180 individuals over four years, together with several important primary and secondary covariates.
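One standard dynamic count model with the decaying-autocorrelation property described above is the binomial-thinning Poisson AR(1); the sketch below uses it purely to illustrate the correlation structure (constant mean, covariate effects suppressed) and is not claimed to be the paper's exact semi-parametric specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hedged sketch: binomial-thinning Poisson AR(1), y_t = rho o y_{t-1} + d_t,
# which gives lag-l autocorrelation rho**l, decaying in the time lag as
# described above. Settings are illustrative assumptions.
n, T, rho, mu = 5000, 4, 0.6, 2.0
y = np.empty((n, T), dtype=int)
y[:, 0] = rng.poisson(mu, size=n)
for t in range(1, T):
    thinned = rng.binomial(y[:, t - 1], rho)           # rho o y_{t-1}
    y[:, t] = thinned + rng.poisson((1 - rho) * mu, size=n)

# Empirical lag-l correlations should track rho**l.
for lag in range(1, T):
    r = np.corrcoef(y[:, 0], y[:, lag])[0, 1]
    print(f"lag {lag}: empirical corr {r:.3f} vs rho**lag {rho**lag:.3f}")
```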
44.
We update a previous approach to estimating the size of an open population when multiple lists are available at each time point. Our motivation is 35 years of longitudinal data on the detection of drug users by the Central Registry of Drug Abuse in Hong Kong. We develop a two-stage smoothing spline approach, which gives a flexible and easily implemented alternative to the previous method based on kernel smoothing, while retaining its property of reducing the variability of the individual estimates at each time point. We evaluate the new method by means of a simulation study that includes an examination of the effects of variable selection, and then apply it to the data collected by the Central Registry of Drug Abuse. The resulting parameter estimates are compared with the well-known Jolly–Seber estimates based on single-capture methods.
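A minimal sketch of the second-stage idea: smooth noisy per-time-point population-size estimates with a smoothing spline to reduce their variability. The first-stage estimates, spline settings, and weights below are illustrative assumptions, not the paper's choices.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)

# Hedged sketch: fake first-stage abundance estimates (one per year)
# are smoothed by a cubic smoothing spline in the second stage.
years = np.arange(1977, 2012)
true_size = 8000 + 3000 * np.sin((years - 1977) / 6.0)
raw_estimates = true_size + rng.normal(scale=800, size=years.size)

# With equal weights, s approximately n * sigma^2 is a common heuristic
# for the smoothing factor; weights could instead reflect first-stage
# standard errors.
spline = UnivariateSpline(years, raw_estimates, s=years.size * 800**2)
smoothed = spline(years)

for yr, raw, sm in zip(years[:5], raw_estimates[:5], smoothed[:5]):
    print(f"{yr}: raw {raw:8.0f}  smoothed {sm:8.0f}")
```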
45.
Clinical trials are often designed to compare continuous non-normal outcomes. The conventional statistical method for such a comparison is the non-parametric Mann–Whitney test, which provides a P-value for testing the hypothesis that the distributions of the two treatment groups are identical but does not provide a simple, straightforward estimate of the treatment effect. For that purpose, Hodges and Lehmann proposed estimating the shift parameter between the two populations together with its confidence interval (CI). However, the shift parameter lacks a straightforward interpretation, and its CI can contain zero even when the Mann–Whitney test produces a significant result. To overcome these problems, we introduce the win ratio for analysing such data. Patients in the new-treatment and control groups are formed into all possible pairs. For each pair, the new-treatment patient is labelled a 'winner' or a 'loser' whenever it is known who had the more favourable outcome. The win ratio is the total number of winners divided by the total number of losers, and a 95% CI for the win ratio can be obtained using the bootstrap method. Statistical properties of the win ratio statistic are investigated using two real trial data sets and six simulation studies. The results show that the win ratio method has about the same power as the Mann–Whitney method. We therefore recommend the win ratio method for estimating the treatment effect (and its CI), and the Mann–Whitney method for calculating the P-value, when comparing continuous non-normal outcomes with a small proportion of tied pairs.
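A minimal sketch of the win ratio computation and its bootstrap CI as described above; the simulated outcome data and the number of bootstrap replicates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def win_ratio(new, control):
    """Win ratio over all new-vs-control pairs, as described above.
    Larger outcomes are assumed more favourable; ties count neither way."""
    diff = np.subtract.outer(np.asarray(new), np.asarray(control))
    wins, losses = np.sum(diff > 0), np.sum(diff < 0)
    return wins / losses

# Fabricated skewed (non-normal) outcomes for illustration only.
new = rng.lognormal(mean=0.4, sigma=1.0, size=60)
control = rng.lognormal(mean=0.0, sigma=1.0, size=60)

estimate = win_ratio(new, control)

# 95% bootstrap CI: resample patients with replacement within each arm.
boot = [win_ratio(rng.choice(new, new.size), rng.choice(control, control.size))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"win ratio {estimate:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```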
46.
We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis and describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distributions of the standard deviation estimator and the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials and derive the adjusted significance level that ensures type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases.
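A minimal sketch of the re-estimation step described above: the variance is estimated from the pooled interim data with treatment labels ignored (the simple one-sample variance estimator) and plugged into the standard two-sample normal-approximation formula. All numerical settings are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hedged sketch of blinded sample size re-estimation. At the interim
# look, the analyst sees only the pooled outcomes, not the labels.
alpha, power, delta = 0.05, 0.90, 0.5      # two-sided level, power, effect

# Internal pilot: pooled (blinded) outcomes from both arms.
pilot = np.concatenate([rng.normal(0.0, 1.2, 30), rng.normal(delta, 1.2, 30)])
s2_blinded = np.var(pilot, ddof=1)          # one-sample variance estimator

# Standard two-sample formula with the blinded variance plugged in.
# Note: with a true effect, the lumped variance overstates the
# within-group variance by roughly delta**2 / 4, which is one driver of
# the operating-characteristic issues discussed above.
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_group = int(np.ceil(2 * z**2 * s2_blinded / delta**2))
print(f"blinded variance {s2_blinded:.3f} -> re-estimated n/group {n_per_group}")
```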
47.
Pseudo-values have proven very useful for censored data analysis in complex settings such as multi-state models. The approach was originally suggested by Andersen et al. (Biometrika, 90, 2003, 335), who also proposed estimating standard errors using classical generalized estimating equation results. These results were studied more formally by Graw et al. (Lifetime Data Anal., 15, 2009, 241), who derived key results based on a second-order von Mises expansion. However, results concerning the large-sample properties of estimates based on regression models for pseudo-values have remained unclear. In this paper, we study these large-sample properties in the simple setting of survival probabilities and show that the estimating function can be written as a second-order U-statistic, giving rise to an additional term that does not vanish asymptotically. We further show that previously advocated standard error estimates will typically be too large, although in many practical applications the difference will be of minor importance. We show how to estimate the variability of the estimator correctly, and we study this further in simulation studies.
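For concreteness, a minimal sketch of jackknife pseudo-values for a survival probability, using the standard construction theta_i = n * theta_hat - (n - 1) * theta_hat(-i) with a hand-rolled Kaplan-Meier estimator; the simulated data are illustrative assumptions.

```python
import numpy as np

def km_survival(times, events, t0):
    """Kaplan-Meier estimate of S(t0). events: 1 = death, 0 = censored."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n, s = len(times), 1.0
    for i, (t, d) in enumerate(zip(times, events)):
        if t > t0:
            break
        if d:
            s *= 1 - 1 / (n - i)    # n - i subjects at risk just before t
    return s

def pseudo_values(times, events, t0):
    """Jackknife pseudo-values n*full - (n-1)*leave-one-out for S(t0)."""
    times, events = np.asarray(times), np.asarray(events)
    n = len(times)
    full = km_survival(times, events, t0)
    pv = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        loo = km_survival(times[mask], events[mask], t0)
        pv[i] = n * full - (n - 1) * loo
    return pv

# Illustrative censored survival data (assumed, not from the paper).
rng = np.random.default_rng(5)
t = rng.exponential(2.0, 40)
c = rng.exponential(3.0, 40)
obs, evt = np.minimum(t, c), (t <= c).astype(int)
pv = pseudo_values(obs, evt, t0=1.5)
print(f"mean pseudo-value {pv.mean():.3f} vs KM {km_survival(obs, evt, 1.5):.3f}")
```

The pseudo-values can then be used as responses in a generalized estimating equation regression, which is the setting whose large-sample behaviour the paper analyses.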
48.
Linear increments (LI) are used to analyse repeated-outcome data with missing values. Two LI methods have previously been proposed: one allows non-monotone missingness but not independent measurement error, and the other allows independent measurement error but only monotone missingness. In both, it was suggested that the expected increment could depend on the current outcome. We show that LI can allow non-monotone missingness together with either independent measurement error of unknown variance or dependence of the expected increment on the current outcome, but not both. A popular alternative to LI is a multivariate normal model that ignores the missingness pattern. This gives consistent estimation when the data are normally distributed and missing at random (MAR). We clarify the relation between MAR and the assumptions of LI, and we show that for continuous outcomes multivariate normal estimators are also consistent under (non-MAR and non-normal) assumptions not much stronger than those of LI. Moreover, when missingness is non-monotone, they are typically more efficient.
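As a point of reference, a generic first-order LI specification consistent with the description above; the notation is our assumption, not taken from the paper.

```latex
% Hedged sketch of a generic linear-increments specification matching
% the description above (assumed notation, not the paper's verbatim):
% the expected increment, given the history, is linear in covariates
% and possibly in the current outcome.
\[
  \mathbb{E}\!\left[\,Y_t - Y_{t-1} \mid Y_{t-1}, X\,\right]
    \;=\; \beta_{0t} + \beta_{1t}\,Y_{t-1} + \gamma_t^{\top} X .
\]
% With independent measurement error, one observes
% $Y_t^{\mathrm{obs}} = Y_t + \varepsilon_t$ with $\varepsilon_t$ noise
% of unknown variance; the abstract's point is that
% $\beta_{1t} \neq 0$ and such measurement error cannot both be
% accommodated by LI.
```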
49.
A problem with using a non-convex penalty for sparse regression is that the penalized sum of squared residuals has multiple local minima, and it is not known which of them is a good estimator. The aim of this paper is to provide a guide for designing a non-convex penalty that has the strong oracle property, meaning that the oracle estimator is the unique local minimum of the objective function. We summarize three definitions of the oracle property: the global, weak, and strong oracle properties. We then give sufficient conditions for the weak oracle property, under which the oracle estimator is a local minimum. We give an example of non-convex penalties that possess the weak oracle property but not the strong oracle property. Finally, we give a necessary condition for the strong oracle property.
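For concreteness, one widely used non-convex penalty of the kind discussed above is SCAD (Fan and Li, 2001); it is given here as our own illustration, since the paper's example penalties may differ.

```latex
% The SCAD penalty (Fan & Li, 2001), given as our illustration of a
% non-convex penalty; not necessarily one of the paper's examples.
% It is defined through its derivative, for $t \ge 0$ and some $a > 2$:
\[
  p_{\lambda}'(t) \;=\; \lambda \left\{ \mathbb{1}(t \le \lambda)
    + \frac{(a\lambda - t)_{+}}{(a-1)\lambda}\, \mathbb{1}(t > \lambda)
    \right\},
\]
% so the penalty is linear near zero (like the lasso), tapers off, and
% is constant for $t \ge a\lambda$. This reduces bias on large
% coefficients at the price of non-convexity, hence the multiple local
% minima of the penalized least-squares objective discussed above.
```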
50.
The paper proposes a new test for detecting the umbrella pattern under a general non-parametric scheme. The alternative asserts that the umbrella ordering holds, while the null hypothesis is its complement. The main focus is on controlling the power function of the test outside the alternative. As a result, the asymptotic type I error of the constructed solution is at most the fixed significance level α on the whole set where the umbrella ordering does not hold, and under finite sample sizes this error is controlled to a satisfactory extent. A simulation study shows, among other things, that the new test improves upon the solution widely recommended in the literature. A routine, written in R, is attached as a Supporting Information file.
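Stated explicitly, the umbrella ordering referred to above is the standard one for k populations with a (known or unknown) peak at position h:

```latex
% The standard umbrella ordering for location parameters
% $\theta_1, \dots, \theta_k$, with peak at position $h$:
\[
  \theta_1 \;\le\; \theta_2 \;\le\; \cdots \;\le\; \theta_h
  \;\ge\; \theta_{h+1} \;\ge\; \cdots \;\ge\; \theta_k ,
\]
% i.e. the parameters rise to a peak at $h$ and fall thereafter. The
% test above takes this ordering as the alternative and its complement
% as the null hypothesis.
```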