191.
When sampling from a continuous population (or distribution), we often want a rather small sample because of the cost of processing sampled units or of collecting information in the field. Moreover, a probability sample that allows for design-based statistical inference is often desired. Given these requirements, we want to reduce the sampling variance of the Horvitz–Thompson estimator as much as possible. To achieve this, we introduce different approaches to using the local pivotal method to select well-spread samples from multidimensional continuous populations. The results of a simulation study clearly indicate that we succeed in selecting spatially balanced samples and improve the efficiency of the Horvitz–Thompson estimator.
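As a concrete illustration of the ideas in this abstract, the sketch below implements one common variant of the local pivotal method (LPM1) together with the Horvitz–Thompson total. The function name, the toy data, and the equal inclusion probabilities are illustrative assumptions, not the paper's code.

```python
import numpy as np

def local_pivotal(coords, pi, seed=None):
    """One common variant of the local pivotal method (LPM1): a random
    undecided unit repeatedly 'fights' its nearest undecided neighbour
    over their combined inclusion probability, until all are 0 or 1."""
    rng = np.random.default_rng(seed)
    pi = np.asarray(pi, dtype=float).copy()
    eps = 1e-12
    idx = np.where((pi > eps) & (pi < 1 - eps))[0]
    while idx.size > 1:
        i = rng.choice(idx)
        others = idx[idx != i]
        j = others[np.argmin(((coords[others] - coords[i]) ** 2).sum(axis=1))]
        s = pi[i] + pi[j]
        if s < 1:   # one of the pair drops to 0
            pi[i], pi[j] = (0.0, s) if rng.random() < pi[j] / s else (s, 0.0)
        else:       # one of the pair is pushed up to 1
            pi[i], pi[j] = (1.0, s - 1.0) if rng.random() < (1 - pi[j]) / (2 - s) else (s - 1.0, 1.0)
        idx = np.where((pi > eps) & (pi < 1 - eps))[0]
    return pi > 0.5

rng = np.random.default_rng(1)
coords = rng.random((200, 2))            # toy spatial population
y = coords.sum(axis=1)                   # toy study variable
pi = np.full(200, 20 / 200)              # equal inclusion probabilities, n = 20
samp = local_pivotal(coords, pi, seed=1)
ht_total = (y[samp] / pi[samp]).sum()    # Horvitz-Thompson estimate of the total
```

The nearest-neighbour pivotal update is what spreads the sample: once a unit is (nearly) selected, its closest neighbour's inclusion probability is pushed towards zero, which is what produces the spatial balance the abstract refers to.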
192.
This study develops a robust automatic algorithm for clustering probability density functions, building on previous research. Unlike existing methods that pre-determine the number of clusters, this method can self-organize data groups based on the original data structure. The proposed clustering method is also robust to noise. Three synthetic-data examples and the real-world COREL dataset are used to illustrate the accuracy and effectiveness of the proposed approach.
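The paper's algorithm is not reproduced here, but the hedged sketch below shows the general shape of the task: estimate each density, compute pairwise distances between the estimated curves, and let a density-based clusterer group them. DBSCAN is used only because it chooses the number of clusters from the data and flags noise as -1; the kernel estimator, the L1 distance, and the DBSCAN parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Toy data: three groups of samples; each sample yields one density estimate
groups = [rng.normal(m, 1, size=(10, 300)) for m in (-3, 0, 3)]
samples = np.vstack(groups)

grid = np.linspace(-8, 8, 400)
dens = np.array([gaussian_kde(s)(grid) for s in samples])

# Pairwise L1 distance between the estimated densities, approximated on the grid
step = grid[1] - grid[0]
D = np.abs(dens[:, None, :] - dens[None, :, :]).sum(axis=-1) * step

# DBSCAN self-organizes the number of clusters and labels outliers as -1
labels = DBSCAN(eps=0.3, min_samples=3, metric="precomputed").fit_predict(D)
```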
193.
The Hodrick–Prescott (HP) filter is frequently used in macroeconometrics to decompose time series, such as real gross domestic product, into trend and cyclical components. Because the HP filter is a basic econometric tool, a precise understanding of its nature is necessary. This article contributes to the literature by listing several (penalized) least-squares problems related to the HP filter, three of which are newly introduced here, and by showing their properties. We also remark on their generalization.
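For reference, the basic penalized least-squares problem behind the filter chooses the trend τ to minimize Σ_t (y_t − τ_t)² + λ Σ_t ((τ_{t+1} − τ_t) − (τ_t − τ_{t−1}))². A minimal NumPy sketch follows; the dense solve is fine for short series (a sparse solver would be preferable for long ones), and λ = 1600 is the usual choice for quarterly data.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """HP trend: argmin over tau of sum (y - tau)^2 + lam * sum (second differences of tau)^2."""
    T = len(y)
    D = np.zeros((T - 2, T))          # (T-2) x T second-difference operator
    for t in range(T - 2):
        D[t, t:t + 3] = [1.0, -2.0, 1.0]
    trend = np.linalg.solve(np.eye(T) + lam * (D.T @ D), y)
    return trend, y - trend           # trend and cyclical components

y = np.cumsum(np.random.default_rng(0).normal(size=200)) + np.sin(np.arange(200) / 6)
trend, cycle = hp_filter(y)
```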
194.
It is known that the normal approximation is applicable to sums of non-negative random variables, W, with the commonly employed couplings. In this work, we use Stein's method to obtain a general theorem giving a non-uniform exponential bound on the normal approximation, based on monotone size-biased couplings of W. Applications of the main result are provided, bounding the normal approximation error for a binomial random variable, for the number of bulbs on at the terminal time in the lightbulb process, and for the number of m-runs.
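The quantity such bounds control is the distance between the distribution of the standardized sum and the standard normal. A quick numerical illustration for the binomial case mentioned above (this checks the approximation directly; it is not the Stein-method derivation):

```python
import numpy as np
from scipy.stats import binom, norm

# Standardize W = (X - np) / sqrt(np(1-p)) and compare its CDF with Phi
n, p = 50, 0.3
mu, sigma = n * p, np.sqrt(n * p * (1 - p))
k = np.arange(n + 1)
err = np.abs(binom.cdf(k, n, p) - norm.cdf((k - mu) / sigma))
print("max CDF error:", err.max())   # shrinks roughly like 1/sqrt(n)
```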
195.
For survival endpoints in subgroup selection, a score conversion model is often used to convert each patient's set of biomarkers into a univariate score, with the median of the scores dividing the patients into biomarker-positive and biomarker-negative subgroups. However, this may bias patient subgroup identification in two respects: (1) treatment may be equally effective for all patients and/or there may be no subgroup difference; (2) the median of the univariate scores may be an inappropriate cutoff if the sizes of the two subgroups differ substantially. We utilize a univariate composite score method to convert each patient's candidate biomarkers into a univariate response score. To address the first issue, we propose applying the likelihood ratio test (LRT) to assess homogeneity of the sampled patients; in the context of identifying the subgroup of responders in an adaptive design to demonstrate improved treatment efficacy (adaptive power), we suggest carrying out subgroup selection only if the LRT is significant. For the second issue, we utilize a likelihood-based change-point algorithm to find an optimal cutoff. Our simulation study shows that type I error is generally controlled, while performing the LRT sacrifices approximately 4.5% of the overall adaptive power to detect treatment effects for the simulation designs considered; furthermore, the change-point algorithm outperforms the median cutoff considerably when the subgroup sizes differ substantially.
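The change-point idea can be sketched as follows: scan candidate cutoffs through the ordered scores and keep the one maximizing a two-group log-likelihood. This is a hedged illustration under a working normal model; the paper's algorithm, working model, and constraints may differ.

```python
import numpy as np
from scipy.stats import norm

def optimal_cutoff(scores, min_size=10):
    """Likelihood-based change-point search: maximize the two-group normal log-likelihood."""
    s = np.sort(scores)
    best_ll, best_cut = -np.inf, None
    for k in range(min_size, len(s) - min_size):
        left, right = s[:k], s[k:]
        ll = (norm.logpdf(left, left.mean(), left.std(ddof=1)).sum()
              + norm.logpdf(right, right.mean(), right.std(ddof=1)).sum())
        if ll > best_ll:
            best_ll, best_cut = ll, (s[k - 1] + s[k]) / 2
    return best_cut

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0, 1, 150), rng.normal(2.5, 1, 50)])
print(optimal_cutoff(scores))   # lands near the true boundary, unlike the median
```

With subgroups of unequal size, the likelihood-maximizing cutoff tracks the true boundary, whereas the median splits the sample 50/50 regardless, which is the second issue the abstract describes.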
196.
Response-adaptive randomisation (RAR) can considerably improve the chances of a successful treatment outcome for patients in a clinical trial by skewing the allocation probability towards better performing treatments as data accumulate. There is considerable interest in using RAR designs in drug development for rare diseases, where traditional designs are either infeasible or ethically questionable. In this paper, we discuss and address a major criticism levelled at RAR: namely, type I error inflation due to an unknown time trend over the course of the trial. The most common cause of this phenomenon is changes in the characteristics of recruited patients, referred to as patient drift. This is a realistic concern for clinical trials in rare diseases because of their lengthy accrual. We compute the type I error inflation as a function of the magnitude of the time trend to determine in which contexts the problem is most exacerbated. We then assess the ability of different correction methods to preserve type I error in these contexts, and their performance in terms of other operating characteristics, including patient benefit and power. We make recommendations as to which correction methods are most suitable in the rare-disease context for several RAR rules, differentiating between the two-armed and the multi-armed case. We further propose a RAR design for multi-armed clinical trials that is computationally efficient and robust to the several time trends considered.
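A rough simulation sketch of the inflation phenomenon: under a Thompson-sampling-type RAR rule (one possible rule; the paper studies several, and this is not its code), with the null hypothesis true but a common drifting response rate on both arms, a naive final z-test rejects too often.

```python
import numpy as np

def one_trial(n=200, drift=0.3, seed=None):
    """Two-armed trial under H0 with patient drift and a Thompson-type RAR rule."""
    rng = np.random.default_rng(seed)
    s, f = np.zeros(2), np.zeros(2)
    for t in range(n):
        p_t = 0.3 + drift * t / n          # same drifting response rate on both arms
        if t < 20:                          # equal-allocation burn-in
            a = t % 2
        else:                               # Thompson sampling on Beta(1,1) posteriors
            a = int(rng.beta(1 + s[1], 1 + f[1]) > rng.beta(1 + s[0], 1 + f[0]))
        y = rng.random() < p_t
        s[a] += y
        f[a] += 1 - y
    n0, n1 = s[0] + f[0], s[1] + f[1]
    p0, p1 = s[0] / n0, s[1] / n1
    pbar = (s[0] + s[1]) / n
    z = (p1 - p0) / np.sqrt(pbar * (1 - pbar) * (1 / n0 + 1 / n1))
    return abs(z) > 1.96

rej = np.mean([one_trial(seed=i) for i in range(2000)])
print("type I error with drift:", rej)     # typically well above the nominal 0.05
```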
197.
A stable money demand function is essential when a monetary aggregate is used as the monetary policy instrument. There is therefore a need to examine the stability of the money demand function in Nigeria after the deregulation of the financial sector. To this end, the study employed CUSUM (cumulative sum) and CUSUMSQ (CUSUM of squares) tests, after using the autoregressive distributed lag (ARDL) bounds test to establish a long-run relationship between monetary aggregates and their determinants. The results show that a long-run relationship holds and that the demand for money in Nigeria is stable. In addition, the inflation rate is found to be a better proxy for the opportunity-cost variable than the interest rate. The main implication of the study is that the interest rate is ineffective as a monetary policy instrument in Nigeria.
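A minimal sketch of the CUSUM stability test on recursive residuals (Brown–Durbin–Evans bounds, 5% level), omitting the ARDL bounds-testing step; the regressors and data below are illustrative, not the study's.

```python
import numpy as np

def cusum_test(y, X, a=0.948):
    """Brown-Durbin-Evans CUSUM test on recursive residuals (a=0.948 at 5%)."""
    T, k = X.shape
    w = np.full(T, np.nan)
    for t in range(k, T):
        Xt, yt = X[:t], y[:t]
        beta = np.linalg.lstsq(Xt, yt, rcond=None)[0]
        denom = np.sqrt(1 + X[t] @ np.linalg.inv(Xt.T @ Xt) @ X[t])
        w[t] = (y[t] - X[t] @ beta) / denom     # recursive residual
    w = w[k:]
    W = np.cumsum(w) / w.std(ddof=1)            # CUSUM of scaled recursive residuals
    r = np.arange(1, T - k + 1)
    bound = a * np.sqrt(T - k) + 2 * a * r / np.sqrt(T - k)
    return W, bound, bool(np.all(np.abs(W) < bound))

rng = np.random.default_rng(0)
T = 120
X = np.column_stack([np.ones(T), rng.normal(size=T)])
y = X @ np.array([1.0, 0.5]) + rng.normal(size=T)   # stable relationship
print(cusum_test(y, X)[2])                          # True: no structural break detected
```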
198.
The seasonal fractional ARIMA (ARFISMA) model with infinite-variance innovations is used in the analysis of seasonal long-memory time series with large fluctuations (heavy-tailed distributions). Two methods are proposed to estimate the parameters of the stable ARFISMA model: the empirical characteristic function (ECF) procedure developed by Knight and Yu [The empirical characteristic function in time series estimation. Econometric Theory. 2002;18:691–721], and a Two-Step Method (TSM). The ECF method estimates all the parameters simultaneously, while the TSM first applies the Markov chain Monte Carlo–Whittle approach introduced by Ndongo et al. [Estimation of long-memory parameters for seasonal fractional ARIMA with stable innovations. Stat Methodol. 2010;7:141–151], and then the maximum likelihood estimation method developed by Alvarez and Olivares [Méthodes d'estimation pour des lois stables avec des applications en finance. Journal de la Société Française de Statistique. 2005;1(4):23–54]. Monte Carlo simulations are used to evaluate the finite-sample performance of these estimation techniques.
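Simulating from such a model is itself nontrivial. The sketch below applies a truncated MA(∞) representation of (1 − B)^(−d)(1 − B^s)^(−D) to α-stable noise; the truncation length, parameter values, and function names are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import levy_stable

def frac_weights(d, m):
    """MA(inf) weights of (1-B)^(-d): psi_j = Gamma(j+d) / (Gamma(j+1) Gamma(d))."""
    j = np.arange(m)
    return np.exp(gammaln(j + d) - gammaln(j + 1) - gammaln(d))

def arfisma_sample(T, d=0.2, D=0.1, s=12, alpha=1.7, trunc=500, seed=0):
    """Truncated-MA sketch of (1-B)^-d (1-B^s)^-D driven by alpha-stable noise."""
    rng = np.random.default_rng(seed)
    burn = trunc * (s + 1)                       # discard filter start-up
    eps = levy_stable.rvs(alpha, 0, size=T + burn, random_state=rng)
    x = np.convolve(eps, frac_weights(d, trunc))[: len(eps)]      # (1-B)^-d
    w = frac_weights(D, trunc)
    y = sum(w[j] * np.roll(x, j * s) for j in range(trunc))       # (1-B^s)^-D
    return y[burn:][:T]

series = arfisma_sample(300)   # heavy-tailed seasonal long-memory sample path
```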
199.
This paper presents some powerful omnibus tests for multivariate normality based on the likelihood ratio and on characterizations of the multivariate normal distribution. The power of the proposed tests is studied against various alternatives via Monte Carlo simulations. The simulation studies show that our tests compare well with other powerful tests, including multivariate versions of the Shapiro–Wilk test and the Anderson–Darling test.
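The Monte Carlo power-study setup described here is easy to reproduce in outline. The sketch below uses Mardia's skewness statistic purely as a stand-in omnibus test (it is not one of the paper's proposed tests) inside a generic power-estimation harness.

```python
import numpy as np
from scipy.stats import chi2

def mardia_skew_test(X):
    """P-value of Mardia's skewness test: n*b1/6 ~ chi2 with p(p+1)(p+2)/6 df under MVN."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(X, rowvar=False, bias=True))
    G = Xc @ S_inv @ Xc.T                       # Mahalanobis inner products
    b1 = (G ** 3).sum() / n ** 2
    return chi2.sf(n * b1 / 6, p * (p + 1) * (p + 2) / 6)

def mc_power(test, sampler, n=50, nsim=2000, alpha=0.05, seed=0):
    """Estimate the power of `test` against the alternative drawn by `sampler`."""
    rng = np.random.default_rng(seed)
    return np.mean([test(sampler(n, rng)) < alpha for _ in range(nsim)])

# Alternative: i.i.d. exponential margins (clearly non-normal)
power = mc_power(mardia_skew_test, lambda n, rng: rng.exponential(size=(n, 3)))
print("estimated power:", power)
```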
200.
Satellite navigation and positioning is one of the fastest-developing and most widely applied technologies in surveying and mapping engineering, and "GPS Surveying and Data Processing" has long been a core course in undergraduate surveying and mapping programs. The course is characterized by rapidly evolving content, deep theory, and a strong practical component, placing high demands on both teaching and learning. This article analyzes problems in the teaching content, teaching methods, practical training, and assessment of the GPS course in the surveying and mapping engineering program at East China University of Technology (东华理工大学), and puts forward corresponding suggestions for teaching reform.