111.
We focus on the construction of confidence corridors for multivariate nonparametric generalized quantile regression functions. This construction is based on asymptotic results for the maximal deviation between a suitable nonparametric estimator and the true function of interest, which follow after a series of approximation steps including a Bahadur representation, a new strong approximation theorem, and exponential tail inequalities for Gaussian random fields. As a byproduct we also obtain multivariate confidence corridors for the regression function in the classical mean regression. To deal with the problem of slowly decreasing error in coverage probability of the asymptotic confidence corridors, which results in meager coverage for small sample sizes, a simple bootstrap procedure is designed based on the leading term of the Bahadur representation. The finite-sample properties of both procedures are investigated by means of a simulation study and it is demonstrated that the bootstrap procedure considerably outperforms the asymptotic bands in terms of coverage accuracy. Finally, the bootstrap confidence corridors are used to study the efficacy of the National Supported Work Demonstration, which is a randomized employment enhancement program launched in the 1970s. This article has supplementary materials online.
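A rough illustration of how a bootstrap-calibrated uniform band can be computed in practice is sketched below: a pairs-bootstrap sup-norm band around a crude kernel-weighted local quantile estimator. This is a generic sketch under simplified assumptions, not the authors' Bahadur-representation-based procedure; the local-constant estimator, Gaussian kernel, bandwidth, and toy data are all illustrative choices.

```python
import numpy as np

def kernel_weights(x0, x, h):
    # Gaussian kernel weights centred at x0 with bandwidth h
    return np.exp(-0.5 * ((x - x0) / h) ** 2)

def local_quantile(x0, x, y, tau, h):
    # Crude local-constant quantile estimator: a kernel-weighted sample quantile
    w = kernel_weights(x0, x, h)
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / w.sum()
    idx = min(np.searchsorted(cw, tau), len(y) - 1)
    return y[order][idx]

def bootstrap_uniform_band(x, y, grid, tau=0.5, h=0.3, B=500, alpha=0.05, seed=0):
    # Pairs bootstrap of the sup-norm deviation over the grid, used to widen
    # the pointwise fit into a uniform (simultaneous) confidence band.
    rng = np.random.default_rng(seed)
    fit = np.array([local_quantile(g, x, y, tau, h) for g in grid])
    max_dev = np.empty(B)
    n = len(x)
    for b in range(B):
        idx = rng.integers(0, n, n)
        fit_b = np.array([local_quantile(g, x[idx], y[idx], tau, h) for g in grid])
        max_dev[b] = np.max(np.abs(fit_b - fit))
    c = np.quantile(max_dev, 1 - alpha)
    return fit, fit - c, fit + c

# toy heteroscedastic data
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 400)
y = np.sin(2 * np.pi * x) + (0.3 + 0.3 * x) * rng.standard_normal(400)
grid = np.linspace(0.05, 0.95, 50)
est, lower, upper = bootstrap_uniform_band(x, y, grid)
```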
112.
In this article, we propose a weighted simulated integrated conditional moment (WSICM) test of the validity of parametric specifications of conditional distribution models for stationary time series data, by combining the weighted integrated conditional moment (ICM) test of Bierens (1984) for time series regression models with the simulated ICM test of Bierens and Wang (2012) for conditional distribution models for cross-section data. To the best of our knowledge, no other consistent test for parametric conditional time series distributions has yet been proposed in the literature, despite consistency claims made by some authors.
113.
This paper develops tests of the null hypothesis of linearity in the context of autoregressive models with Markov-switching means and variances. These tests are robust to the identification failures that plague conventional likelihood-based inference methods. The approach exploits the moments of normal mixtures implied by the regime-switching process and uses Monte Carlo test techniques to deal with the presence of an autoregressive component in the model specification. The proposed tests have very respectable power in comparison with the optimal tests for Markov-switching parameters of Carrasco et al. (2014), and they are also quite attractive owing to their computational simplicity. The new tests are illustrated with an empirical application to an autoregressive model of U.S. output growth.
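The Monte Carlo test idea can be illustrated in a much-simplified form: a moment-based statistic on AR(1) residuals (a Markov-switching mean turns the innovations into a normal mixture, which shows up in skewness and excess kurtosis), calibrated by simulating the linear Gaussian AR(1) null at the fitted parameters. This is a parametric-bootstrap simplification under illustrative assumptions, not the paper's procedure or its handling of nuisance parameters.

```python
import numpy as np

def fit_ar1(y):
    # OLS fit of an AR(1) with intercept; returns intercept, slope, residual sd, residuals
    y_lag, y_now = y[:-1], y[1:]
    X = np.column_stack([np.ones_like(y_lag), y_lag])
    beta, *_ = np.linalg.lstsq(X, y_now, rcond=None)
    resid = y_now - X @ beta
    return beta[0], beta[1], resid.std(ddof=2), resid

def moment_stat(resid):
    # Squared skewness plus squared excess kurtosis of the residuals:
    # a Markov-switching mean makes the innovations a normal mixture,
    # which is reflected in these moments.
    z = (resid - resid.mean()) / resid.std()
    return (z ** 3).mean() ** 2 + ((z ** 4).mean() - 3.0) ** 2

def mc_linearity_test(y, n_mc=199, seed=0):
    # Monte Carlo P value: simulate the linear Gaussian AR(1) null at the
    # fitted parameters and compare the observed statistic with the
    # simulated ones (a parametric-bootstrap simplification).
    rng = np.random.default_rng(seed)
    c, phi, sigma, resid = fit_ar1(y)
    s_obs = moment_stat(resid)
    exceed = 0
    for _ in range(n_mc):
        e = sigma * rng.standard_normal(len(y))
        y_sim = np.empty(len(y))
        y_sim[0] = y[0]
        for t in range(1, len(y)):
            y_sim[t] = c + phi * y_sim[t - 1] + e[t]
        if moment_stat(fit_ar1(y_sim)[3]) >= s_obs:
            exceed += 1
    return (exceed + 1) / (n_mc + 1)

# toy linear AR(1) series
rng = np.random.default_rng(2)
y = np.empty(300)
y[0] = 0.0
for t in range(1, 300):
    y[t] = 0.2 + 0.6 * y[t - 1] + rng.standard_normal()
print(mc_linearity_test(y))
```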
114.
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is a loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversifying financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of two P value combination methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analyses at different times. The robustness of the results suggests that inference from group sequential trials can be strengthened by the use of combined tests.
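The core idea of combining P values from different test statistics, with resampling used to hold the overall type I error at its nominal level despite the correlation between tests, can be sketched as follows. The example combines two generic two-sample tests through the minimum P value with permutation calibration; it is only an illustration of the combination principle, not the time-to-event group sequential procedure evaluated in the article.

```python
import numpy as np
from scipy import stats

def minp_combination_test(x, y, n_perm=999, seed=0):
    # Combine the P values of two different two-sample tests (Welch t and
    # Wilcoxon-Mann-Whitney) through their minimum, and calibrate that
    # combined statistic by permutation so the overall type I error stays
    # at its nominal level despite the correlation between the tests.
    rng = np.random.default_rng(seed)

    def min_pvalue(a, b):
        p_t = stats.ttest_ind(a, b, equal_var=False).pvalue
        p_w = stats.mannwhitneyu(a, b, alternative="two-sided").pvalue
        return min(p_t, p_w)

    obs = min_pvalue(x, y)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if min_pvalue(perm[:n_x], perm[n_x:]) <= obs:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# toy two-sample data with unequal variances
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 40)
y = rng.normal(0.5, 2.0, 40)
print(minp_combination_test(x, y))
```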
115.
Multiphase experiments are introduced and an overview of their design and analysis as it is currently practised is given via an account of their development since 1955 and a literature survey. Methods that are available for designing and analysing them are outlined, with an emphasis on making explicit the role of the model in their design. The availability of software and its use is described in detail. Overall, while multiphase designs have been applied in areas such as plant breeding, plant pathology, greenhouse experimentation, product storage, gene expression studies, and sensory evaluation, their deployment has been limited.
116.
Multiple non-symmetric correspondence analysis (MNSCA) is a useful technique for analyzing a two-way contingency table. In more complex cases, there is more than one predictor variable. In this paper, MNSCA, together with a decomposition of the Gray–Williams Tau index into main effects and an interaction term, is used to analyze a contingency table with two categorical predictor variables and an ordinal response variable. The Multiple-Tau index is a measure of association that contains both the main effects and the interaction term. The main effects represent the change in the response variable due to changes in the levels/categories of the predictor variables, considered additively, while the interaction effect represents the combined effect of the categorical predictor variables on the ordinal response variable. Moreover, for ordinal-scale variables, we propose a further decomposition, using Emerson's orthogonal polynomials, to check for the existence of power components.
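As a point of reference for the association measure being decomposed, the sketch below computes the ordinary single-predictor Goodman–Kruskal tau index for a two-way table; the Gray–Williams Multiple-Tau used in the paper extends this to two predictors and splits it into main effects and an interaction term. The toy table is invented purely for illustration.

```python
import numpy as np

def goodman_kruskal_tau(table):
    # Goodman-Kruskal tau for a two-way contingency table with rows as
    # predictor categories and columns as response categories: the
    # proportional reduction in prediction error for the response given
    # the predictor. This is only the single-predictor building block,
    # not the Gray-Williams Multiple-Tau of the paper.
    p = table / table.sum()
    p_row = p.sum(axis=1)        # predictor margins
    p_col = p.sum(axis=0)        # response margins
    within = (p ** 2 / p_row[:, None]).sum()
    baseline = (p_col ** 2).sum()
    return (within - baseline) / (1.0 - baseline)

# toy 3x4 table: 3 predictor categories, 4 ordinal response categories
table = np.array([[20, 15, 10, 5],
                  [10, 20, 20, 10],
                  [ 5, 10, 15, 25]], dtype=float)
print(goodman_kruskal_tau(table))
```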
117.
Copulas have proved to be very successful tools for the flexible modeling of dependence. Bivariate copulas have been researched in depth in recent years, while building higher-dimensional copulas is still recognized to be a difficult task. In this paper, we study higher-dimensional dependent reliability systems using a type of decomposition called a “vine,” by which a multivariate distribution can be decomposed into a cascade of bivariate copulas. Equations for the system reliability of parallel, series, and k-out-of-n systems are obtained and then decomposed based on C-vine and D-vine copulas. Finally, a shutdown system is considered to illustrate the results obtained in the paper.
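For the bivariate building block, system reliability can be written directly in terms of a copula: for a two-component series system, R(t) = P(T1 > t, T2 > t) is the survival copula evaluated at the marginal survival functions, while for a parallel system R(t) = 1 - C(F1(t), F2(t)). The sketch below assumes exponential margins and a Clayton copula purely for illustration; the paper's C-vine and D-vine decompositions extend such expressions to higher dimensions.

```python
import numpy as np

def clayton(u, v, theta):
    # Clayton copula C(u, v); theta > 0 gives positive dependence
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def series_reliability(t, theta, lam1, lam2):
    # Two-component series system with exponential margins whose joint
    # survival is linked by a Clayton survival copula:
    #   R(t) = P(T1 > t, T2 > t) = C_hat(S1(t), S2(t))
    s1, s2 = np.exp(-lam1 * t), np.exp(-lam2 * t)
    return clayton(s1, s2, theta)

def parallel_reliability(t, theta, lam1, lam2):
    # Parallel system: it fails only when both components have failed,
    #   R(t) = 1 - C(F1(t), F2(t))
    f1, f2 = 1.0 - np.exp(-lam1 * t), 1.0 - np.exp(-lam2 * t)
    return 1.0 - clayton(f1, f2, theta)

# reliability at t = 1 for illustrative failure rates and dependence strength
print(series_reliability(1.0, theta=2.0, lam1=0.5, lam2=0.8))
print(parallel_reliability(1.0, theta=2.0, lam1=0.5, lam2=0.8))
```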
118.
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine, machine learning, and credit scoring. The receiver operating characteristic (ROC) surface is a useful tool for assessing the ability of a diagnostic test to discriminate among three ordered classes or groups. In this article, nonparametric predictive inference (NPI) for three-group ROC analysis with ordinal outcomes is presented. NPI is a frequentist statistical method that is explicitly aimed at using few modeling assumptions, enabled through the use of lower and upper probabilities to quantify uncertainty. This article also includes results on the volumes under the ROC surfaces and consideration of the choice of decision thresholds for the diagnosis. Two examples are provided to illustrate our method.
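The classical empirical volume under the ROC surface (VUS) for three ordered groups is the proportion of correctly ordered triples, one score drawn from each group. The sketch below computes it for ordinal scores with a common half-weight convention for ties; this is the plain empirical estimate, not the paper's NPI lower and upper probabilities.

```python
import numpy as np
from itertools import product

def empirical_vus(g1, g2, g3):
    # Empirical VUS: proportion of triples (x1, x2, x3), one score from each
    # ordered group, with x1 < x2 < x3. Ties are counted with weight 1/2
    # (one tie) or 1/6 (all three tied), a common convention for ordinal
    # scores and an assumption of this sketch.
    total = 0.0
    for a, b, c in product(g1, g2, g3):
        if a < b < c:
            total += 1.0
        elif a == b < c or a < b == c:
            total += 0.5
        elif a == b == c:
            total += 1.0 / 6.0
    return total / (len(g1) * len(g2) * len(g3))

# toy ordinal scores (e.g., a 5-point diagnostic rating) for three groups
healthy  = np.array([1, 1, 2, 2, 3])
moderate = np.array([2, 3, 3, 4, 4])
severe   = np.array([3, 4, 4, 5, 5])
print(empirical_vus(healthy, moderate, severe))
```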
119.
In clinical trials, missing data commonly arise through nonadherence to the randomized treatment or to study procedures. For trials in which recurrent event endpoints are of interest, conventional analyses using the proportional intensity model or the count model assume that the data are missing at random, an assumption that cannot be tested using the observed data alone. Thus, sensitivity analyses are recommended. We implement control-based multiple imputation as a sensitivity analysis for recurrent event data. We model the recurrent events using a piecewise exponential proportional intensity model with frailty and sample the parameters from the posterior distribution. We impute the number of events after dropout and correct the variance estimation using a bootstrap procedure. We apply the method to data from a sitagliptin study.
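The mechanics of a control-based (copy-reference) multiple imputation for recurrent events can be sketched in a strongly simplified form: post-dropout event counts are imputed from a Poisson distribution with the control arm's estimated rate, each completed data set is analyzed, and the log rate ratios are combined with Rubin's rules. This ignores the frailty term, the piecewise-exponential structure, and the bootstrap variance correction of the paper; it only shows the flow of the procedure, and all data below are invented.

```python
import numpy as np

def control_based_mi_rate_ratio(events, exposure, dropout_gap, arm, n_imp=20, seed=0):
    # Copy-reference imputation: post-dropout event counts in both arms are
    # drawn from a Poisson distribution with the control arm's estimated rate,
    # then log rate ratios from the completed data sets are pooled with
    # Rubin's rules. Only a sketch of the mechanics, not the paper's model.
    rng = np.random.default_rng(seed)
    ctrl_rate = events[arm == 0].sum() / exposure[arm == 0].sum()
    est, var = [], []
    for _ in range(n_imp):
        extra = rng.poisson(ctrl_rate * dropout_gap)   # imputed post-dropout events
        ev = events + extra
        ex = exposure + dropout_gap                    # full intended follow-up
        log_rr = np.log((ev[arm == 1].sum() / ex[arm == 1].sum()) /
                        (ev[arm == 0].sum() / ex[arm == 0].sum()))
        est.append(log_rr)
        # crude Poisson variance of a log rate ratio
        var.append(1.0 / ev[arm == 1].sum() + 1.0 / ev[arm == 0].sum())
    est, var = np.array(est), np.array(var)
    total_var = var.mean() + (1 + 1 / n_imp) * est.var(ddof=1)   # Rubin's rules
    return est.mean(), np.sqrt(total_var)

# toy trial: 50 control and 50 treated subjects, 2 years intended follow-up,
# roughly 20% of subjects drop out early
rng = np.random.default_rng(3)
arm = np.repeat([0, 1], 50)
full = 2.0
exposure = np.where(rng.random(100) < 0.2, rng.uniform(0.5, 2.0, 100), full)
dropout_gap = full - exposure
events = rng.poisson(np.where(arm == 1, 0.6, 1.0) * exposure)
print(control_based_mi_rate_ratio(events, exposure, dropout_gap, arm))
```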
120.
Various statistical tests have been developed for testing the equality of means in matched pairs with missing values. However, most existing methods are based on distributional assumptions such as normality, symmetry around zero, or homoscedasticity of the data. The aim of this paper is to develop a statistical test that is robust against deviations from such assumptions and also leads to valid inference in the case of heteroscedasticity or skewed distributions. This is achieved by applying a clever randomization approach to handle missing data. The resulting test procedure is not only shown to be asymptotically correct but is also finitely exact if the distribution of the data is invariant with respect to the considered randomization group. Its small-sample performance is further studied in an extensive simulation study and compared to existing methods. Finally, an illustrative data example is analysed.
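The core randomization idea can be sketched for the complete pairs alone: sign-flipping the paired differences generates a reference distribution for the paired t statistic. The paper's procedure additionally exploits the incompletely observed portions of the data and is finitely exact under the stated invariance; the code below is only the familiar sign-flip building block, with incomplete pairs simply dropped.

```python
import numpy as np

def signflip_paired_test(x, y, n_rand=5000, seed=0):
    # Randomization (sign-flipping) test for the mean of paired differences,
    # applied to the complete pairs only; observations with a missing partner
    # are dropped here, whereas the paper's test also uses them.
    rng = np.random.default_rng(seed)
    complete = ~np.isnan(x) & ~np.isnan(y)
    d = x[complete] - y[complete]
    n = len(d)
    t_obs = np.abs(d.mean()) / (d.std(ddof=1) / np.sqrt(n))
    exceed = 0
    for _ in range(n_rand):
        d_r = rng.choice([-1.0, 1.0], size=n) * d
        t_r = np.abs(d_r.mean()) / (d_r.std(ddof=1) / np.sqrt(n))
        if t_r >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_rand + 1)

# toy matched pairs with missing values coded as np.nan
x = np.array([5.1, 4.8, np.nan, 6.0, 5.5, 4.9, np.nan, 5.7])
y = np.array([4.7, np.nan, 5.2, 5.4, 5.1, 4.6, 5.0, 5.3])
print(signflip_paired_test(x, y))
```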