21.
This paper considers the optimal design problem for multivariate mixed-effects logistic models with longitudinal data. A decomposition method of the binary outcome and the penalized quasi-likelihood are used to obtain the information matrix. The D-optimality criterion based on the approximate information matrix is minimized under different cost constraints. The results show that the autocorrelation coefficient plays a significant role in the design. To overcome the dependence of the D-optimal designs on the unknown fixed-effects parameters, the Bayesian D-optimality criterion is proposed. The relative efficiencies of designs reveal that both the cost ratio and autocorrelation coefficient play an important role in the optimal designs.
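As a rough illustration of the kind of criterion involved, the sketch below computes a D-criterion, the log-determinant of the information matrix X'WX with W = diag(p(1−p)), for a plain fixed-effects logistic design; it is a minimal sketch, not the paper's multivariate mixed-effects construction with cost constraints, and the design points and parameter values are hypothetical.

```python
import numpy as np

def logistic_information(X, beta):
    """Fisher information X' W X for a logistic regression design,
    where W = diag(p * (1 - p)) and p is the fitted probability."""
    eta = X @ beta
    p = 1.0 / (1.0 + np.exp(-eta))
    W = np.diag(p * (1.0 - p))
    return X.T @ W @ X

def d_criterion(X, beta):
    """Log-determinant of the information matrix; a D-optimal design
    maximizes this quantity (equivalently, minimizes the determinant
    of the corresponding covariance matrix)."""
    sign, logdet = np.linalg.slogdet(logistic_information(X, beta))
    return logdet if sign > 0 else -np.inf

# Hypothetical two-point design with an intercept and one dose variable.
doses = np.array([0.2, 0.8])
X = np.column_stack([np.ones_like(doses), doses])
beta = np.array([-1.0, 2.0])   # assumed fixed-effects values
print(d_criterion(X, beta))
```

A D-optimal design is then sought by comparing this log-determinant across candidate designs, which is consistent with minimizing the determinant of the corresponding covariance matrix under the cost constraints.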
22.
A random effects model for analyzing mixed longitudinal count and ordinal data is presented in which the count response is inflated at two points (k and l), so a (k,l)-inflated power series distribution is used as its distribution. A full likelihood-based approach is used to obtain maximum likelihood estimates of the model parameters. For data with non-ignorable missing values, models with a probit model for the missing-data mechanism are used. The dependence between the longitudinal sequences of responses and the inflation parameters is investigated using a random effects approach. To capture the correlation between the mixed ordinal and count responses of each individual at each time point, a shared random effect is used. To assess the performance of the model, a simulation study is carried out for the case in which the count response follows a (k,l)-inflated Binomial distribution. Performance comparisons of the count-ordinal random effects model, the zero-inflated ordinal random effects model, and the (k,l)-inflated ordinal random effects model are also given. The model is applied to a real social data set from the first two waves of the National Longitudinal Study of Adolescent to Adult Health (Add Health). In this data set, the joint responses are the number of days in a month on which each individual smoked (the count response) and the general health condition of each individual (the ordinal response); the count response shows an excess of the values 0 and 30.
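A minimal sketch of the inflation idea, assuming a simple two-point mixture on top of a Binomial (the weight names pi_k and pi_l are illustrative, not the paper's notation, and the random effects and missing-data machinery are omitted):

```python
from scipy.stats import binom

def kl_inflated_binom_pmf(y, n, p, k, l, pi_k, pi_l):
    """pmf of a (k, l)-inflated Binomial(n, p):
    with probability pi_k the response is k, with probability pi_l it is l,
    and with the remaining probability it follows Binomial(n, p)."""
    base = (1.0 - pi_k - pi_l) * binom.pmf(y, n, p)
    if y == k:
        base += pi_k
    if y == l:
        base += pi_l
    return base

# Example: days smoked in a month, inflated at 0 and 30 as in the Add Health data.
print(kl_inflated_binom_pmf(0, 30, 0.3, k=0, l=30, pi_k=0.4, pi_l=0.1))
print(kl_inflated_binom_pmf(30, 30, 0.3, k=0, l=30, pi_k=0.4, pi_l=0.1))
```

With k = 0 and l = 30, the mixture places extra mass exactly on the two values that are over-represented in the Add Health smoking counts.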
23.

Item response models are essential tools for analyzing results from many educational and psychological tests. Such models are used to quantify the probability of a correct response as a function of unobserved examinee ability and other parameters explaining the difficulty and the discriminatory power of the questions in the test. Some of these models also incorporate a threshold parameter for the probability of the correct response to account for the effect of guessing the correct answer in multiple-choice tests. In this article we consider fitting of such models using the Gibbs sampler. A data augmentation method to analyze a normal-ogive model incorporating a threshold guessing parameter is introduced and compared with a Metropolis-Hastings sampling method. The proposed method is an order of magnitude more efficient than the existing method. Another objective of this paper is to develop Bayesian model choice techniques for model discrimination. A predictive approach based on a variant of the Bayes factor is used and compared with another decision-theoretic method which minimizes an expected loss function on the predictive space. A classical model choice technique based on a modified likelihood ratio test statistic is shown as one component of the second criterion. As a consequence, the Bayesian methods proposed in this paper are contrasted with the classical approach based on the likelihood ratio test. Several examples are given to illustrate the methods.
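For concreteness, a normal-ogive item response function with a threshold guessing parameter has the form P(correct | θ) = c + (1 − c)Φ(a(θ − b)). The short sketch below evaluates it for hypothetical item parameters; it does not reproduce the paper's data augmentation or Gibbs sampling scheme.

```python
from scipy.stats import norm

def normal_ogive_guessing(theta, a, b, c):
    """P(correct | ability theta) = c + (1 - c) * Phi(a * (theta - b)),
    i.e. a normal-ogive item response function with a guessing threshold c."""
    return c + (1.0 - c) * norm.cdf(a * (theta - b))

# Hypothetical item: discrimination 1.2, difficulty 0.5, guessing 0.25 (4-option item).
for theta in (-2.0, 0.0, 2.0):
    print(theta, normal_ogive_guessing(theta, a=1.2, b=0.5, c=0.25))
```

The guessing parameter c acts as a lower asymptote on the response probability.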
24.
Measuring and improving the efficiency of the Chinese commercial banking system has recently attracted increasing interest. Few studies, however, have adopted two-stage network DEA to explore this issue in the Chinese context. Because the entire operational process of the banking system can be divided into two sub-processes (deposit production and profit earning), evaluating the sub-process efficiencies helps identify the sources of inefficiency in the banking system as a whole. In this study, we utilize the network DEA approach to disaggregate, evaluate and test the efficiencies of 16 major Chinese commercial banks during the third round of the Chinese banking reform period (2003–2011), under a variable returns to scale setting and with undesirable (bad) output taken into account. The main findings of this study are as follows: (i) the two-stage DEA model is more effective than the conventional black-box DEA model in identifying the inefficiency of the banking system, and the inefficiency of the Chinese banking system primarily results from the inefficiency of its deposit-producing sub-process; (ii) the overall efficiency of the Chinese banking system improves over the study period because of the reform; (iii) the state-owned commercial banks (SOBs) appear to be more efficient overall than the joint-stock commercial banks (JSBs) only in the pre-reform period, and the efficiency difference between the SOBs and the JSBs narrows over the post-reform period; (iv) the disposal of non-performing loans (NPLs) explains the efficiency improvement of the Chinese banking system in general, while the joint-equity reform of the SOBs in particular increases their efficiencies.
25.

Asymptotic confidence (delta) intervals and intervals based upon the use of Fieller's theorem are alternative methods for constructing intervals for the γ% effective doses (EDγ). Sitter and Wu (1993) provided a comparison of the two approaches for the ED50, for the case in which a logistic dose-response curve is assumed. They showed that the Fieller intervals are generally superior. In this paper, we introduce two new families of intervals, both of which include the delta and Fieller intervals as special cases. In addition we consider interval estimation of the ED90 as well as the ED50. We provide a comparison of the various methods for the problem of constructing a confidence interval for the EDγ.
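For a logistic dose-response curve logit(p) = β0 + β1x, the ED50 is −β0/β1, and both classical constructions can be written down directly. The sketch below contrasts a delta-method interval with a Fieller interval under hypothetical estimates and covariance; it illustrates only the two standard intervals, not the new families proposed in the paper.

```python
import numpy as np
from scipy.stats import norm

def ed50_delta_ci(b0, b1, cov, level=0.95):
    """Delta-method interval for ED50 = -b0/b1 of a logistic dose-response fit."""
    z = norm.ppf(0.5 + level / 2.0)
    ed50 = -b0 / b1
    grad = np.array([-1.0 / b1, b0 / b1 ** 2])   # gradient of -b0/b1 w.r.t. (b0, b1)
    se = np.sqrt(grad @ cov @ grad)
    return ed50 - z * se, ed50 + z * se

def ed50_fieller_ci(b0, b1, cov, level=0.95):
    """Fieller interval: the set of t with (b0 + t*b1)^2 <= z^2 * Var(b0 + t*b1),
    a quadratic inequality A*t^2 + B*t + C <= 0 in t."""
    z2 = norm.ppf(0.5 + level / 2.0) ** 2
    v00, v01, v11 = cov[0, 0], cov[0, 1], cov[1, 1]
    A = b1 ** 2 - z2 * v11
    B = 2.0 * (b0 * b1 - z2 * v01)
    C = b0 ** 2 - z2 * v00
    disc = B ** 2 - 4.0 * A * C
    if A <= 0 or disc < 0:
        return None   # the Fieller set is not a finite interval in this case
    lo = (-B - np.sqrt(disc)) / (2.0 * A)
    hi = (-B + np.sqrt(disc)) / (2.0 * A)
    return lo, hi

# Hypothetical logistic fit: intercept -2.0, slope 1.5, with its covariance matrix.
cov = np.array([[0.30, -0.10], [-0.10, 0.08]])
print(ed50_delta_ci(-2.0, 1.5, cov))
print(ed50_fieller_ci(-2.0, 1.5, cov))
```

When the leading coefficient of the quadratic is not positive, the Fieller set is unbounded or the whole line rather than an interval, which is one reason comparisons with the delta interval are of interest.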
26.
We consider the problem of evaluating the efficiency of Decision Making Units (DMUs) based on their deterministic performance on multiple consumed inputs and multiple produced outputs. We apply a ratio-based efficiency measure, and account for the Decision Maker's preference information representable with linear constraints involving input/output weights. We analyze the set of all feasible weights to answer various robustness concerns by deriving: (1) extreme efficiency scores and (2) extreme efficiency ranks for each DMU, (3) possible and necessary efficiency preference relations for pairs of DMUs, (4) efficiency distributions, (5) efficiency rank acceptability indices, and (6) pairwise efficiency outranking indices. The proposed hybrid approach combines and extends previous results from Ratio-based Efficiency Analysis and the SMAA-D method. The practical managerial implications are derived from the complementary character of the accounted perspectives on DMUs' efficiencies. We present innovative open-source software implementing an integrated framework for robustness analysis using a ratio-based efficiency model on the diviz platform. The proposed approach is applied to a real-world problem of evaluating the efficiency of Polish airports. We consider four inputs related to the capacities of a terminal, runways, and an apron, and to the airport's catchment area, and two outputs concerning passenger traffic and the number of aircraft movements. We show how the results can be affected by integrating the weight constraints and eliminating outlier DMUs.
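The ratio-based efficiency of a DMU is its weighted-output to weighted-input ratio maximized over the feasible weight set; with the usual normalization of the evaluated DMU's weighted input to one, this becomes a linear program, and the Decision Maker's preference information enters as extra linear constraints on the weights. The sketch below computes one such maximal score with scipy's linprog on made-up data and a hypothetical weight constraint; it illustrates the basic ratio model only, not the full REA/SMAA-D robustness analysis (extreme ranks, acceptability indices, outranking indices).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 4 DMUs, 2 inputs (columns of X), 2 outputs (columns of Y).
X = np.array([[4.0, 3.0], [6.0, 2.0], [8.0, 5.0], [5.0, 4.0]])
Y = np.array([[2.0, 3.0], [3.0, 2.0], [4.0, 6.0], [3.0, 4.0]])

def ratio_efficiency(X, Y, o, weight_ineqs=None):
    """Maximal ratio efficiency of DMU o: max u'y_o subject to v'x_o = 1,
    u'y_j - v'x_j <= 0 for every DMU j, optional linear weight constraints,
    and strictly positive weights (approximated by a small lower bound)."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[o]])          # minimize -u'y_o
    A_ub = np.hstack([-X, Y])                          # u'y_j - v'x_j <= 0
    b_ub = np.zeros(n)
    if weight_ineqs is not None:                       # extra rows a'[v, u] <= 0
        A_ub = np.vstack([A_ub, weight_ineqs])
        b_ub = np.concatenate([b_ub, np.zeros(len(weight_ineqs))])
    A_eq = np.concatenate([X[o], np.zeros(s)]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(1e-6, None)] * (m + s))
    return -res.fun

# Hypothetical preference: the first output weight is at most three times the second.
w = np.array([[0.0, 0.0, 1.0, -3.0]])
for o in range(4):
    print(o, round(ratio_efficiency(X, Y, o, weight_ineqs=w), 3))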
27.
Many in the data visualization and evaluation communities recommend conveying the message or takeaway of a visualization in its title. This study tested that recommendation by examining how informative versus generic titles affect a visualization's visual efficiency, aesthetics, credibility, and the perceived effectiveness of the hypothetical program examined. It also tested how simple versus complex graphs, and positive, negative, or mixed results (i.e., the valence of the results), affected these outcomes. Participants were randomly assigned to one of 12 conditions in a 2 (graph: simple or complex) × 2 (title: generic or informative) × 3 (valence: positive, negative, mixed) between-subjects design. The results indicated that informative titles required less mental effort and were viewed as more aesthetically pleasing, but otherwise did not lead to greater accuracy, credibility, or perceived effectiveness. Titles also did not interact with graph type or the valence of the findings. While the results suggest it is worthwhile to consider adding an informative title to a data visualization, since it can reduce mental effort for the viewer, the intended goal of the visualization should still be weighed: that goal can be the deciding factor in the type of graph and title that will best serve the visualization's purposes. Overall, this suggests that data visualization recommendations that shape evaluation reporting practices should be scrutinized more closely through research.
28.
Banks occasionally employ frontier efficiency analyses to objectively identify best practices within their organizations. Amongst such methods, Data Envelopment Analysis (DEA) has been found to be one of the leading approaches. DEA has been successfully applied in many bank branch performance evaluations using the traditional intermediation, profitability and production approaches. However, there has been little focus on assessing the growth potential of individual branches. This research presents five models that examine three perspectives of branch growth. Each model was applied to the branch network of one of Canada's top five banks to gauge the growth potential of individual branches and to provide tailored improvement recommendations. The results of each model were examined and their functionality assessed using various analysis methodologies.
29.
PCORnet, the National Patient-Centered Clinical Research Network, seeks to establish a robust national health data network for patient-centered comparative effectiveness research. This article reports the results of a PCORnet survey designed to identify the ethics and regulatory challenges anticipated in network implementation. A 12-item online survey was developed by leadership of the PCORnet Ethics and Regulatory Task Force; responses were collected from the 29 PCORnet networks. The most pressing ethics issues identified related to informed consent, patient engagement, privacy and confidentiality, and data sharing. High priority regulatory issues included IRB coordination, privacy and confidentiality, informed consent, and data sharing. Over 150 IRBs and five different approaches to managing multisite IRB review were identified within PCORnet. Further empirical and scholarly work, as well as practical and policy guidance, is essential if important initiatives that rely on comparative effectiveness research are to move forward.
30.
This paper is the first to apply Elastic Net, a penalization method designed for highly correlated variables, to Bayesian quantile regression for panel data. Based on the asymmetric Laplace prior distribution, the posterior distributions of all parameters are derived and a Gibbs sampler is constructed. To verify the effectiveness of the model, the Bayesian Elastic Net quantile regression for panel data (BQR.EN) is compared comprehensively, under a variety of scenarios, with Bayesian quantile regression for panel data (BQR), Bayesian Lasso quantile regression for panel data (BLQR), and Bayesian adaptive Lasso quantile regression for panel data (BALQR). The results show that BQR.EN is well suited to data that are highly correlated, high-dimensional, and heavy-tailed with sharp peaks. Further simulations under different error-term assumptions and different sample sizes confirm the robustness and small-sample properties of the new method. Finally, the economic value added (EVA) of listed Internet finance companies is used as the empirical application to examine the new method's parameter estimation and variable selection performance on a real problem; the empirical results are in line with expectations.
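As a point of reference for the objective being sampled, the asymmetric Laplace working likelihood corresponds (up to scale constants) to the quantile check loss, and an Elastic Net prior corresponds to an L1 plus squared-L2 penalty on the coefficients. The sketch below writes this penalized check-loss for a given quantile τ on simulated heavy-tailed data; it is a frequentist analogue for illustration, not the paper's Gibbs sampler or its panel-data structure.

```python
import numpy as np

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - I(u < 0)); up to constants,
    this is the negative log of the asymmetric Laplace density."""
    return u * (tau - (u < 0).astype(float))

def penalized_objective(beta, X, y, tau, lam1, lam2):
    """Elastic-Net-penalized quantile regression objective:
    sum of check losses plus lam1 * ||beta||_1 + lam2 * ||beta||_2^2."""
    resid = y - X @ beta
    return (check_loss(resid, tau).sum()
            + lam1 * np.abs(beta).sum()
            + lam2 * np.square(beta).sum())

# Hypothetical small example at the median (tau = 0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, 0.0, -2.0])
y = X @ beta_true + rng.standard_t(df=3, size=50)   # heavy-tailed noise
print(penalized_objective(beta_true, X, y, tau=0.5, lam1=0.1, lam2=0.1))
```

For a given τ and penalty weights, a posterior mode under priors of this kind minimizes an objective of exactly this form, which is why the Elastic Net prior is expected to help with highly correlated, high-dimensional predictors.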